alfadriver
alfadriver MegaDork
3/29/25 10:08 a.m.

We all know AI is the in thing for tech right now, but when I see what it delivers, I really wonder how that actually makes my life better.

Before I get too far, I will preface this with the smart phone conversation we had here about a decade ago- I was a doubter that it was as good as everyone said, but in the end, I was wrong.  And given its capability compared to other options, it's really good.

I'm not really solid on what AI delivers right now- supposedly different writing of stuff, or picture making, or something like that- I kind of call those lazy helpers.  Much of that is based on how a real person edited my writing to both make it much better and still sound like me (thank you, again, David).  AI generated pictures are not really something appealing to me- and they are not all that great to look at when used in what I would consider the wrong way (The History Channel regularly uses AI to make fake pictures to represent historical people, even when they say it's a generated picture). 

The other thing it supposedly does is better searches, with an AI summary at the top of the results.  But one scroll down shows the same information in the first few links.  So it's really no better.

Never tried things like ChatGPT, as I don't know what it's even supposed to deliver.

And my experience at work using smart data mining was less than impressive.

So am I missing something, or are the tech companies inflating this a little too far? 

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/29/25 10:46 a.m.
alfadriver said:

So am I missing something, or are the tech companies inflating this a little too far? 

Yes. Yes to both.

There are valuable and viable use cases to "AI", but overwhelmingly it's being pushed into places where it is not very useful or even potentially harmful - largely by people who understand it the least.

Like - my wife (who is, for layman's purposes, a "programmer") is regularly being pushed by management that she should "Use AI more." For what? How? "Just use AI more!"

Firstly - "AI" is not an artificial intelligence. The current "AI" systems are predictive, generative, algorithms. They look at a set of data and output their best guess of what information should come next. They are incredibly good at taking in a BUNCH of information and producing a narrow answer. This can be useful when it is being controlled by someone who actually understands these things and can interpret this data better. "AI" algorithms have proved invaluable for doing things like identifying cancer cells and such.

They would be really time saving for other applications where people need to sift through lots of messy and chaotic data: identifying controlled objects in airport security, flagging potential fraudulent insurance claims, etc.

They largely get used to pad out bullE36 M3 that should just be eliminated. My brother works in defense contracting (scoping and selling military technology to the government). He uses AI to write out proposal documents. But mostly what that does is take his simple, basic, and clear information that he wants to convey - "We have this item. It does these things. We want to sell it to this department for this much." - and pad it out to pages of B.S. that someone else will then plug into an AI program to simplify it back down to his original simple statement.

AI does not write better than a person. It writes more *precisely* than a person, which is very different.

I've seen other interesting artistic uses for it, but largely, those are people finding ways to exploit how bad AI outputs are in interesting but unintended ways.

I recently discovered the music videos of Czart, which use how horrible AI videos are to produce hilariously disturbing horror music videos:

 

GameboyRMH
GameboyRMH GRM+ Memberand MegaDork
3/29/25 10:46 a.m.

Tech companies are inflating this way too far beyond what's actually been made so far. I was at the bleeding edge of smartphones from the Palm/Handspring Treo days and got an early mobile data connection to test with it, but I also don't find AI very useful. It's fun for generating pictures for memes etc., but I wouldn't use it for writing because I don't like the AI writing style or its capacity to hallucinate, and using it for coding seems like just copy-pasting snippets from Stack Overflow at high speed and with even less knowledge of what you're doing.

The most interesting uses of this tech are in high-speed image/audio recognition and possibly in making a souped-up search engine with Retrieval Augmented Generation. The problem with both of these is hallucinations. For example, AI-powered medical transcription software has already been found to hallucinate extra words or sentences and insert them into the text. Like death by ransomware, it's likely that the first death by AI hallucination has already happened in a hospital but we just don't know exactly who it was. If a RAG setup hallucinates some finding from searching a data set, then all you've built is a high-speed misinformation machine, and the only way to know if it's happening is to do the same work yourself.
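For the curious, the RAG idea boils down to something like this - a rough Python sketch, where retrieve() and ask_llm() are made-up stand-ins rather than any real API:

# Retrieval Augmented Generation, roughly: search your own data set first,
# then hand the hits to the language model along with the question.
# retrieve() and ask_llm() are hypothetical placeholders for illustration.
def answer_with_rag(question, index):
    docs = retrieve(index, question, top_k=3)        # pull the most relevant documents
    context = "\n\n".join(doc.text for doc in docs)  # stuff them into the prompt
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return ask_llm(prompt)                           # the model can still hallucinate here

The failure mode I mean is that last step: if the model invents a "finding" that isn't in the retrieved documents, the only way you'd know is by reading those documents yourself.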

Finally, the commercial viability of all this is highly questionable. AI services right now are mostly subsidized by investment money to fund the ridiculously power-hungry data centers this runs on. Even if it were actually useful, it might not be affordable. The models you can run on a gaming PC at home mostly aren't up to the level of the top-tier commercial offerings; some, like DeepSeek R1, come close, but their efficiency comes with some big downsides.

OHSCrifle
OHSCrifle GRM+ Memberand PowerDork
3/29/25 10:53 a.m.

I heard on a podcast yesterday that somebody took a photograph of a thermostat's wiring and asked ChatGPT to look at it and explain how to reconnect everything.

And it did

🤯

wae
wae UltimaDork
3/29/25 10:58 a.m.

I was at a conference a few months ago and, of course, AI was a big topic.  The joke was made that nobody knows what to do with it yet, but everybody is demanding that for every product we have to go ahead and take out all the blockchain and stuff it full of AI.

At their hearts, the large language models are statistical word algorithms.  Yes, there is more going on there, but what they are good at is computing the statistically most probable word that would follow the previous word.  And they are actually quite good at that.  The problem is that we, as humans, are very susceptible to believing wrongness that is delivered with high confidence. 
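Stripped down to a toy, the idea is something like this - just a word-frequency table in Python.  A real LLM is a neural network over tokens, but the "most probable next word" spirit is the same:

from collections import Counter

# Toy "language model": count which word follows which in some training text,
# then always emit the most frequent follower.
corpus = "the cat sat on the mat and the cat ate the food".split()

following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def next_word(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))  # "cat" - statistically likely, whether or not it's what you meant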

Here's an analogy:  Bob is a physicist who has a PhD in whatever physicists overeducate themselves on and knows everything there is to know about how light particles move through cardboard and, since it's my story, his theories are all 100% correct and accurate.  But he has so much information on the topic that when he is asked to give a 10 minute speech to a kindergarten class, it's not going to turn out well.  Fred is a speechwriter who is trained and practiced and amazing at his job and by working with the speaker can produce amazingly easy to understand speeches for audiences of any level.  Fred will sit down with Bob and pick through his brain to distill the physics into a speech that the kindergarten class will enjoy and understand.  Fred doesn't know E36 M3 about physics, though.  So if Fred is told to also incorporate information from Sam, but Sam is a crackpot who has completely wrong theories, Fred doesn't know enough to be able to give weight to one expert or the other.  He'll just string the words together so they sound good.

The other problem that an LLM has is that it is susceptible to hallucinations.  In my view, the most dangerous of those are when the system has to make inferences because it can't find an answer, and that is driven by the programming and design of the LLM.  For example, I asked it for the source of a paraphrased quote that I remembered from a magazine article a while ago.  It couldn't find it, but it very confidently told me that it was attributed to Steve Jobs.  Now, I was pretty sure that was wrong, so I asked it who attributed it to Steve Jobs.  It got a little squirrelly and changed its mind and said that it thought that it sounded like something he would have said.  I pressed it further and it finally came back and said that it was sorry, but it actually couldn't find anything about that quote at all. 

Somewhere in there, they need to program the LLM to be able to fill in gaps and make inferences.  When they do that, there are decisions that have to be made about how aggressively it is going to fill in gaps - if it didn't fill them in at all, it would be almost impossible to get any information from it.

I have been using both Copilot and ChatGPT for help with a software development project I've been working on, as well as with doing some resume work, and they have been pretty good with those.

For the coding help, it's awesome at being able to debug things that would take me a long time to find.  For example, I was trying to output an HTML table as a PDF using a specific module, and while the HTML would render properly in a browser, the PDF wouldn't render at all.  I stared at it for a while and couldn't find the problem.  When I fed the code into ChatGPT, however, it instantly noticed that I had a typo in an HTML tag.  The browser just handled it, but the PDF module silently choked on it.
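I don't remember the exact tag, so this is just a made-up illustration of the kind of thing it caught - a table with one bad closing tag that a browser quietly fixes up but a strict HTML-to-PDF module can choke on:

# Hypothetical illustration - not the actual code or the actual tag.
# Build an HTML table with one bad closing tag: a browser quietly recovers
# and renders it anyway, but a strict HTML-to-PDF converter may render nothing.
rows = [("Widget", 3), ("Gadget", 7)]
html = "<table>"
for name, qty in rows:
    html += f"<tr><td>{name}</td><td>{qty}</td></r>"  # typo: should be </tr>
html += "</table>"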

For resume and writing help, I've used it to prompt me to write.  For example, I had it walk me through the process of developing my STAR stories by getting it to ask me the questions I needed to answer.  And then, I could type my stream of consciousness back in and it would structure it for me into bullet points that I would then use to write paragraphs.  Same stuff that I usually do for myself when I write, but it speeds up the process a bit.

What it isn't good at is validating its own training data.  It can't make a value judgement; it just knows what its training data usually does.  That's how you have LLMs rickrolling people.  It doesn't know WHY people are always answering questions with a link to this YouTube video, but it happens so much that it must be the statistically most likely thing that should be sent.

What's going to get wicked, though, is when adoption of agentic AI starts to snowball.  These are AI models that don't just string words together but actually perform actions.  For example, imagine an agentic AI that learns that the most statistically likely thing for it to do when it senses that it is 40 degrees outside, the motion sensors in the house are quiet, and the left garage door begins to open is to turn on the lights in the living room, raise the temperature in the house to 76 degrees, turn on the fireplace, and turn on the TV and start the F1 TV app.  Or, in a business sense, when it detects that the CFO just sent an email to someone they've never communicated with in the shipping department and there's some mention about sending payments, it will block the email, alert security, and initiate a lock on the last few snapshots of data in case a restore has to happen.
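Conceptually, the garage-door example works out to something like this - every name below is invented, the point is just "observe the conditions, perform the usual actions" rather than generating text:

# Sketch of the agentic idea: the system has logged what usually happens
# after a given combination of conditions, and now performs it itself.
# All functions and field names here are hypothetical.
def on_event(state, learned_actions):
    # state might be {"outside_temp": 40, "motion": "quiet", "left_garage": "opening"}
    key = (state["outside_temp"] < 45, state["motion"], state["left_garage"])
    for action in learned_actions.get(key, []):
        action()  # e.g. lights_on, set_thermostat(76), fireplace_on, launch_f1tv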

GameboyRMH
GameboyRMH GRM+ Memberand MegaDork
3/29/25 10:59 a.m.

In reply to Beer Baron 🍺 :

LOL, weirdo music videos are definitely a good application

This is another one. Seems like it could've been filmed, until someone flips back-to-front or something. Warning: Not exactly work safe.

 

alfadriver
alfadriver MegaDork
3/29/25 11:01 a.m.
Beer Baron 🍺 said:
alfadriver said:

So am I missing something, or are the tech companies inflating this a little too far? 

Yes. Yes to both.

 

Firstly - "AI" is not an artificial intelligence. The current "AI" systems are predictive, generative, algorithms. They look at a set of data and output their best guess of what information should come next. They are incredibly good at taking in a BUNCH of information and producing a narrow answer. This can be useful when it is being controlled by someone who actually understands these things and can interpret this data better. "AI" algorithms have proved invaluable for doing things like identifying cancer cells and such.

 

Just wanted to counter this part- I actually used AI at work to look at a bunch of data to better understand the physics of what was going on, and it failed pretty badly.  It wasn't tasked to know the physics, but to find relationships in the data that we were missing- like what was really influencing good or bad air-fuel control.  The machine learning we were using could not even detect things that had a direct influence on it.  That's when I really started to question what this machine learning and AI could really do.  We did this for many months, with a lot of technical help from the group that was developing the tools- so we had a lot of effort put into this.

Maybe things have gotten a lot better, but I was left wanting.  

From what I can see, the best learning is when a massive amount of human effort is put into the system, and the AI just uses their data to determine what is what later.  

alfadriver
alfadriver MegaDork
3/29/25 11:09 a.m.

In reply to wae :

Question on the coding typo- if you had a "spelling" system for coding, would that be capable of doing the same thing?  Like when I type here, the web page can see when I make a typo in english (like that last word is underlined because of the E not being capitalized).  Do you need AI to find that in code?  Or could there be a coding-specific spell check that would do it better?

As for the last paragraph of things that AI could do, there are systems that are not AI that can already do all of that.  How does AI make it better?  That's where I'm missing it.

You kind of hint toward auto-coding in all of that too- and I really despise that quite a lot.  It always takes up more space, runs slower, and is really hard to read for the end user of the code.  Life was a lot better when code space really mattered.

(for that matter, I also miss the benefit of IoT for most of that- the only benefit I could possibly see is turning the t-stat on when our plane touches down back at home- but I turn my internet off when I'm gone for a reason)

GameboyRMH
GameboyRMH GRM+ Memberand MegaDork
3/29/25 11:24 a.m.

In reply to alfadriver :

Yes, there are spellcheck-like systems for coding that will check basic function names and basic syntax and highlight things that are wrong. These can catch typos and syntax errors, but not logical or design errors in syntactically correct code, while AI may be able to find those. There are AI systems that can scan code for buffer overflows and the like, but they also come back with a lot of false positives, which are bothering developers a lot these days.
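A quick example of the difference - a linter is perfectly happy with this because every name and bracket is valid, but the logic is wrong, and that's the kind of thing an AI review might (or might not) catch:

def discount_price(price, percent_off):
    # Passes any linter/spellcheck-style tool: valid names, valid syntax.
    # Logically wrong: it subtracts the raw percentage instead of a
    # percentage OF the price. Should be price * (1 - percent_off / 100).
    return price - percent_off

print(discount_price(200, 10))  # 190, not the 180 you actually wanted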

GameboyRMH
GameboyRMH GRM+ Memberand MegaDork
3/29/25 11:26 a.m.

In reply to wae :

I think you mixed up some names in your analogy.

alfadriver
alfadriver MegaDork
3/29/25 11:35 a.m.
GameboyRMH said:

In reply to alfadriver :

Yes there are spellcheck-like systems for coding that will check basic function names and basic syntax and highlight things that are wrong. These can catch typos and syntax errors, but not logical or design errors in syntactically correct code, while AI may be able to find those. There are AI systems that can scan code for buffer overflows etc, but they also come back with a lot of false positives that are bothering developers a lot these days.

That's interesting.  Not because AI can find the error, but because there have been English grammar checkers available for writing for over 30 years- I was able to use them in school in the 80s.  So why hasn't something similar been developed for coding?  

 

(still, that doesn't help me think that getting Apple Intelligence is a feature that's actually helpful... LOL)

GameboyRMH
GameboyRMH GRM+ Memberand MegaDork
3/29/25 11:37 a.m.

In reply to alfadriver :

Grammar checkers won't tell you if the story you're writing has plot holes or is nonsensical, just as a regular coding syntax/spellchecker can't warn you if your code has bugs or is insecure.

wae
wae UltimaDork
3/29/25 11:37 a.m.

In reply to alfadriver :

Well, in this case the answer is sorta.  The tag that I used was a legitimate tag, but it was not the correct one.  And I was generating the HTML code programmatically in another language (PHP), so as far as my editor was concerned, my code was fine.  I have had other logic errors in my code, however, that it has been able to find very quickly for me.  Nothing that I couldn't have found on my own eventually, but it sped up the process.  Basically, I can give it a snippet of code and ask "why doesn't this work", and it'll usually have a very correct answer.  I have had it write small snippets of code for me as well, and it's done pretty well with that.  For example, I could tell it to write an SQL statement that will give me the game name and ticket count for the game specified with an inventory ID of x, and it would spit out the query that I need.  It would already know what the column and table names are because it can infer that from the other questions I've asked and code snippets that I've shared.  These may all be things that someone who is very experienced at coding would not find helpful.  But I'm not so great at that, so I find it very helpful.
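The kind of query that request produces would look something like this - the table and column names are invented here since I'm not posting my real schema, and it's wrapped in Python's sqlite3 just to keep it concrete:

import sqlite3

# Hypothetical schema: a "games" table with game_name, ticket_count,
# and inventory_id columns. 42 stands in for "an inventory ID of x".
conn = sqlite3.connect("games.db")  # hypothetical database file
query = """
    SELECT game_name, ticket_count
    FROM games
    WHERE inventory_id = ?
"""
row = conn.execute(query, (42,)).fetchone()
print(row)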

Another thing that it is really good at for coding is being able to explain it.  Again, from the perspective of someone who isn't an expert.  I've given it snippets of code and told it to walk me through the code, step by step, and explain what is happening.  And it does a great job with that request.

As far as what AI is doing that is different, it comes down to letting it make its own decisions about things.  For example, in the home automation example, it would be possible for it to learn over time that if the S&P was down by a quarter point in the day, you don't want the fireplace on.  Or if the weather was gloomy in the morning, you actually would prefer to watch Reacher on Amazon instead of racing.  Or any one of a million other correlations that a human brain wouldn't really be able to sit and walk through, but that it could observe and notice.  Basically, you're not telling it that you want the fireplace on and that you want the light on and all that.  It has just noticed that under these conditions, this is what usually happens, so it will go ahead and do that for you.

wae
wae UltimaDork
3/29/25 11:39 a.m.

In reply to GameboyRMH :

I sure did.  I blame it on not enough coffee.  And I didn't use AI!

 

Fixed.

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/29/25 11:53 a.m.
alfadriver said:
Beer Baron 🍺 said:

... They are incredibly good at taking in a BUNCH of information and producing a narrow answer. This can be useful when it is being controlled by someone who actually understands these things and can interpret this data better. "AI" algorithms have proved invaluable for doing things like identifying cancer cells and such.

Just wanted to counter this part- I actually used the AI at work to look at a bunch of data to better understand the physics of what is going on, and it failed pretty badly.  It wasn't tasked to know the physics, but to find relationships in the data that we were missing- like what was really influencing good or bad air fuel control that we were missing.  The machine learning we were using could not even detect things that had a direct influence on it.  That's when I really started to question this machine learning and AI for what it could really do.  We did this for many months, with a lot of technical help from the group that was developing the tools- so we had a lot of effort put into this.

To counter your counter - I said AI can produce a NARROW answer... not necessarily a good one. Often it doesn't.

But overall, I think we're in complete agreement on what it can and can't do.

It sounds like you gave it a relatively novel problem, and AI tends to fall completely flat when that happens. It does not understand the data it is presented. It's good when people train it and it is then able to notice small patterns more quickly than them.

I've heard of a lot of good potential uses for it in aiding medical diagnoses because it's often better at identifying anomalies in scans like MRIs and such. They were talking on NPR about using it to help interpret mammograms to screen for breast cancer: an AI can often notice things that a human misses and cut down on the rate of false positives and false negatives. But we shouldn't rely on AI to determine and prescribe the best course of care.

The result seems to pretty much always be: AI is a tool that, when trained properly for a task, can speed up and improve the accuracy of analyzing data when used by people who are already knowledgeable.

It breaks down as:

Knowledgeable humans alone - slower, but high quality work.
AI alone - fast, but tends to produce lots of errors.
Knowledgeable humans using AI - fastest production of the highest quality work.

alfadriver
alfadriver MegaDork
3/29/25 12:15 p.m.
wae said:

In reply to alfadriver :

 

As far as what AI is doing that is different, it comes down to letting it make its own decisions about things.  For example, in the home automation example, it would be possible to learn over time that if the S&P was down by a quarter point in the day, you don't want the fireplace on.  Or if the weather was gloomy in the morning, you actually would prefer to watch Reacher on Amazon instead of racing.  Or any one of a million other correlations that a human brain wouldn't really be able to sit and walk through, but that it could observe and notice.  Basically, you're not telling it that you want the fireplace on and that you want the light on and all that.  It has just noticed that under these conditions, this is what usually happens so it will go ahead and do that for you.

But to be able to know to do that, the system would have to see it at least once.  It can't just see the S&P result and automatically know to not use the fireplace, or that rain on a racing weekend results in watching Reacher.  Even a human would need to observe that and connect the two to repeat it.  Let alone understand that the output is actually related to the input.  I personally think I'm too random of a person to see a single result and think I want that same outcome a second time (although, I know I'm not that random).

alfadriver
alfadriver MegaDork
3/29/25 12:20 p.m.
Beer Baron 🍺 said:
 

It breaks down as:

Knowledgeable Humans alone - slower but high quality work.
AI alone - fast, but tend to produce lots of errors
Knowledgeable humans using AI - fastest production of highest quality work.

That 100% makes sense.  And if given enough data, it *might* help figure out where a/f control could be made better for better emissions.  Apple Intelligence isn't that, nor is ChatGPT.

So I still can't see a connection as a consumer.  Like, the work I would do at home is woodworking, gardening, making X-Mas cards, cleaning the house, changing the oil, fixing a car, etc.  Other than perhaps diagnosing a car problem faster, I struggle with AI helping me do any of what I do at home or out.

 

BTW- to all of you, sorry to be argumentative on this, but this is where I really don't understand the benefit of AI in my personal life.  While I didn't see it at work, I can at least see that it can ID things with enough teaching, which is helpful.

Keith Tanner
Keith Tanner GRM+ Memberand MegaDork
3/29/25 12:33 p.m.

I use it to summarize videos. Given the constrained input, it works pretty well. Great for generating a précis or for cutting down a padded-out YouTube video to just the actual facts so I don't have to spend 30 minutes to get 30 seconds of info. To me, that's very valuable.

I've also used it to spark ideas with images. Ask for a certain type of image and it will do something you wouldn't have done yourself, which can unlock some creativity by giving you another way to look at it. 

I can use it to generate product descriptions at work but the output is mind-numbingly generic. If I was selling feelings, it would work. Great for fashion and food.  But I'm selling technical products, so padding out the description with more words that contain no actual information is pointless.

Internet search? It's bad. Not trustworthy.  Solving math problems? As we saw in the Chinese EV thread, also not good. 

Of course it's not going to help with woodworking directly, that's physical work. It still lives purely in the digital realm.

Beer Baron 🍺
Beer Baron 🍺 MegaDork
3/29/25 1:15 p.m.
Keith Tanner said:

I can use it to generate product descriptions at work but the output is mind-numbingly generic. If I was selling feelings, it would work. Great for fashion and food.

It doesn't work selling fashion or food either. Mind-numbingly generic descriptions suck for those.

I write the overwhelming majority of product descriptions for our beers and liquors. These have to be very concise and evocative. This is really *really* hard to do. At least to do well.

I need to come up with a description that accurately describes the flavors in a way that shows how it is unique from similar products AND does so in an emotionally evocative way that sparks an emotional connection that makes you desire it. AI is completely incapable of that.

We played around with AI as a joke. It writes the most bland and generic beer, wine, and liquor descriptions you can imagine.

Duke
Duke MegaDork
3/29/25 2:13 p.m.

I used to write all the business proposals at work.  Putting together a solid proposal narrative and graphics package that was tailored to the potential client and project could be a week's worth of hard work.

One of the bosses, who is definitely the 10,000-foot big picture guy, told me to try using ChatGPT to streamline the process, particularly for projects we didn't have a ton of experience with.

As Beer Baron said, it came up with the most generic, repetitious pablum imaginable.  And since I've seen bucketsful of mistakes in AI-generated answers for topics where I do have some expertise, I was very dubious of the factual content in areas where I don't.  I never actually incorporated any of the output because it needed so much refinement and fact-checking.  It was faster and easier to just learn the points myself and compose the writing the old fashioned way.

However, I was pretty much the only one in the office who could put together a coherent, efficiently written paragraph.  Now that I'm not there, they may have better luck using it to tune up their writing.

 

Peabody
Peabody MegaDork
3/29/25 2:59 p.m.

A few weeks ago a customer tried to tell me that I was wrong about something and sent me a screen shot of an obviously AI generated answer to a specific question. I responded with a pic of me measuring the part in question with a vernier. In this case it was not helpful. 

Pete. (l33t FS)
Pete. (l33t FS) GRM+ Memberand MegaDork
3/29/25 4:37 p.m.

AI is GIGO and doesn't have the BS detector algorithms.

Like, say, disposing of batteries.  People know you can't throw them in the ocean.  Maybe there isn't a codified law that explicitly states "you can't throw batteries in the ocean" but a human would see existing littering and dumping laws and infer that it is illegal.  But if 400 social media users post that it's legal and acceptable to dispose of batteries by chucking them into the harbor, then the AI generated result will weigh in that favor.

 

Now, AI DOES have its uses.  Humans are pattern-seeking devices, but we are more geared to seeing faces on toast than to spotting analytical trends.  One use of AI that I liked was the examination of two sets of code for a 3D print, where one printed in a third of the time of the auto-generated version.  The user asked the AI if it could understand the code, then, after an affirmative response, asked for an analysis of why one print would be faster than the other.  It was fascinating, and not the sort of thing humans can really do without a lot of time, or being some sort of savant, or both.

cyow5
cyow5 HalfDork
3/29/25 4:48 p.m.

I think the smart phone analogy is useful here. Plenty of people use their smart phones for very dumb things and would be better off with a flip phone. They'll argue to your face and then get mad when you pull out the phone and give a fact-based answer. They'd rather just use it for doomscrolling Facebook. Those people tend to also share very poorly AI-ed images of cakes on Facebook. Barbara...
 

Such is AI. Garbage In, Garbage Out was referenced above. At my work, AI is extremely useful for some projects. The aero guys don't have to run nearly as many sweeps to build a full aero model since, instead of interpolating between conditions, AI is well-suited for interpolating between data points in a large data set. I'm sure racing is using it similarly to determine wing angles and the like. 
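Very roughly, it's a surrogate-model idea: fit something to the sweep points you did run, then query it at the conditions you didn't. The numbers and names below are invented, and a Gaussian process is just one common choice for this kind of fit:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Made-up sweep data: (ride height mm, wing angle deg) -> downforce coefficient
X = np.array([[30.0, 0.0], [30.0, 10.0], [50.0, 0.0], [50.0, 10.0]])
y = np.array([2.1, 2.6, 1.8, 2.3])

model = GaussianProcessRegressor().fit(X, y)

# Ask about a condition that was never actually run in the tunnel
print(model.predict(np.array([[40.0, 5.0]])))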
 

One of the dangers of AI is IP leakage. If you want it to proofread your book, it may now recommend your plot to a competing author. A defense firm had a design leak the same way. For us, there is an offline model that has already been trained.

Pete. (l33t FS)
Pete. (l33t FS) GRM+ Memberand MegaDork
3/29/25 4:53 p.m.

A long time ago, like 1986-1989, I read an article in Science Digest where they were working with a new concept: evolutionary algorithms.  The specific case was Diesel engine combustion.  They fed the combustion characteristics into the program, and the program would alter things a little this way, a little that way, and then iterate from whatever change brought a positive result - in this case cleaner and quieter combustion. 

After some long series of iterations, the researchers found that if they could have five or six micro-injections, efficiency went up, cleanliness went up, and combustion noise went down.  The new problem, the article concluded, was being able to actually do this in the real world.

Move forward a few decades, and electronically controlled common-rail Diesels do exactly this.
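The loop the article described is basically a mutate-and-keep-the-winner search, something like this - the score() here is a made-up placeholder, since the real objective was a combustion model rating efficiency, cleanliness, and noise:

import random

def score(params):
    # Placeholder objective for illustration - the article's version was a
    # combustion simulation; here "best" is just every parameter near 3.0.
    return -sum((p - 3.0) ** 2 for p in params)

params = [random.uniform(0.0, 6.0) for _ in range(4)]  # e.g. injection timings
best = score(params)

for _ in range(10000):
    candidate = [p + random.gauss(0.0, 0.1) for p in params]  # alter a little this way, a little that way
    s = score(candidate)
    if s > best:  # keep only the changes that brought a positive result
        params, best = candidate, s

print(params)  # drifts toward the optimum of the placeholder objective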

 

I wish I could FIND that article!

 

Keith Tanner
Keith Tanner GRM+ Memberand MegaDork
3/29/25 5:07 p.m.

In reply to Pete. (l33t FS) :

That's generative design. It can produce some amazing engineering solutions that, thanks to 3D printing, can actually be executed. 

It predates the LLM craze so it's not called "AI", but it would be if it were introduced today. 
