It looks to me like the expectations of end users can be altered, and AI then becomes significantly more important.
If we stop caring about the quality of the end product, the quality of the content created matters less and less. I think it's already happening. The data is so bad and so prolific that using the internet for good info is becoming less and less possible.
I think the dilution of quality data is more likely than the proliferation of more quality data.
AI is a terribly lazy path to content creation, but the crap it generates sells, so people will become less and less concerned with quality, and more and more willing to accept garbage data.
Does it help me? Nope.
Keith Tanner said:In reply to Pete. (l33t FS) :
That's generative design. It can produce some amazing engineering solutions that, thanks to 3D printing, can actually be executed.
It predates the LLM craze so it's not called "AI", but it would be if it were introduced today.
That looks like the CNC machined bike parts craze of the 1990s. Coooool.
The CNC bottle holders were a bit much (could make those from rolled sheet rather than milling out most of a huge billet) but lots of companies sprang up making intricate parts with a dream and time on a 5 axis center.
Then Chinese companies duplicated them with castings...
Two recent examples: I looked up the torque specs for brake caliper bracket bolts for a specific car and the "AI summary" results were 50% off.
I was curious about what kind of car Scott LaFaro (the famous bassist who played with Bill Evans) had his fatal crash in. AI results returned a Ford Fairlane. I asked the same question again phrased a little differently and got "the specific model is not widely known." I found a written account from someone who was with him earlier on the night of his death stating that it was a "large Chrysler." Getting different results to the same question phrased differently doesn't exactly build confidence.
I've had both very disappointing and very positive experiences with LLM-type AI.
My first and mostly short-lived excitement about LLMs was that, since they were trained on most of the public-facing Internet, I'd be able to ask them for details on subjects where I just hadn't found the forum or site discussing the minutiae of, say, motorcycle chassis behavior.
What I quickly discovered is that they would happily tell me the most likely answer, which tends to be some combination of popularly correct things, popularly incorrect things, and hallucinated details. Then when they contradicted something (I say "they"; at that point I think I'd only tried ChatGPT, but I expect it would've been the same elsewhere), I'd say "I'm pretty sure that's wrong," or even "I'm pretty sure that's wrong because it's pretty established that X is true or that Y isn't true," at which point the LLM would apologize for getting it wrong and agree with my correction.
THAT SAID... I've recently been dabbling in some embedded systems programming, a topic with a much larger volume of writing on the web AND whose subject matter is itself largely text-based. An LLM can read my code, but it can't really look at my diagrams or my motorcycle, and its quasi-understanding of physical-world stuff like cars and woodworking is only as good as how well people have written about it, which is usually not great.
In that domain, I've found Claude to be really, really good at helping me find information that was stymying me with the relatively complex environment setup (this isn't my area of expertise, and I'm using a bunch of wholly unfamiliar tools), summarizing, and giving me steps to isolate and correct the issue I was having. Much of that information may exist but I couldn't find it with normal search engines; some of it may never have been written up for, say, the specific environment variables I was having trouble with, and Claude may have adapted similar questions and discussions of this and other variables into a remarkably sensible result.
I do believe at any given moment you've got to be ready to say "I don't think that sounds right" or "yeah, but something about that description doesn't quite smell like my situation." It's easy for the specificity of your question to suggest a more complete understanding of your situation than it has; as wae pointed out way above, it doesn't actually *understand* your situation at all, it's just doing very, very specific word probability guesses with a lot of context.
One of the impressive things is that while it has no real intelligence, per se, it has perfect recall, and it doesn't get bored. That's a neat trick in troubleshooting, proofreading, etc.
I'm concerned about what the LLMification of search is going to do to websites, forums, etc. I hear tell (unconfirmed) that Google is trialing only giving an AI summary and no links. Why go to the effort and expense of creating a website if search is just going to skim the answer and not send you any traffic? I like that Perplexity gives you a summary and also cites its sources. Websites have for years been built to please search engines more than human users, a whole tier of businesses has sprung up regurgitating and (mis-)paraphrasing the original info (a process now sped up and potentially automated by LLMs), and now even the "legitimate" search engines are trying to give a summary rather than direction to the full context. I hope this is a transitional phase.
Anybody want to bring back webrings?
"Hey, I'm a human aggregating some of my most useful info. Here are some other websites of people doing the same thing." Yes, this would get copied, but as long as you followed links from one legit place to the next, you shouldn't get caught out by those...
I think when most people hear "AI", they think ChatGPT, which is an interactive toy at the moment for most people.
Where I believe AI has applications is unrelated to ChatGPT. When a large organization trains a model to do a specific thing with a gigantic dataset, it can achieve good results (as long as it isn't expecting miracles). Rendering upscaling/frame generation is a pretty notable example (Nvidia DLSS).
I'm sure other non-consumer-facing applications are being used inside mega corps with success: defect detection/QC, optimization strategies, etc.
I still think a great application would be to train a traffic light controller using input from a bunch of cameras. Think of it as a traffic cop with godlike powers directing flow at an intersection (a toy sketch of the idea is below). That is something that could have a meaningful impact on the lives of many, beyond just job replacement or lower costs.
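For the curious, here's a minimal sketch of what "training a controller" could mean in practice: a toy Q-learning loop over a single intersection, assuming the cameras can give us per-approach queue counts. Everything in it (the state buckets, the reward, the little traffic simulator) is a hypothetical illustration, not any real deployed system:

# Toy learned traffic-signal controller. Assumes cameras provide queue
# lengths for the north-south and east-west approaches. All numbers are
# made up for illustration.
import random
from collections import defaultdict

ACTIONS = ["NS_GREEN", "EW_GREEN"]  # which axis gets the green light

def bucket(state):
    """Coarsely discretize queue lengths so the Q-table stays small."""
    ns, ew = state
    return (min(ns // 5, 3), min(ew // 5, 3))

q_table = defaultdict(float)  # (state_bucket, action) -> learned value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state):
    if random.random() < epsilon:          # occasionally explore
        return random.choice(ACTIONS)
    b = bucket(state)
    return max(ACTIONS, key=lambda a: q_table[(b, a)])

def step(state, action):
    """Stand-in for the real world: the green direction drains cars,
    both directions get random arrivals. Reward = -(total queue)."""
    ns, ew = state
    if action == "NS_GREEN":
        ns = max(0, ns - 4)
    else:
        ew = max(0, ew - 4)
    ns += random.randint(0, 2)
    ew += random.randint(0, 2)
    return (ns, ew), -(ns + ew)

state = (0, 0)
for _ in range(50_000):  # training loop
    action = choose_action(state)
    nxt, reward = step(state, action)
    b, nb = bucket(state), bucket(nxt)
    best_next = max(q_table[(nb, a)] for a in ACTIONS)
    q_table[(b, action)] += alpha * (reward + gamma * best_next - q_table[(b, action)])
    state = nxt

The interesting design choice is the reward: minimizing total queue length is the "godlike traffic cop" objective, and the controller learns when to flip the light purely from that signal.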
Jesse Ransom said:Anybody want to bring back webrings?
"Hey, I'm a human aggregating some of my most useful info. Here are some other websites of people doing the same thing." Yes, this would get copied, but as long as you followed links from one legit place to the next, you shouldn't get caught out by those...
I really miss webrings.
Their loss made sense back before Google search hits started only giving you marketing sites. I'm looking for information ABOUT things, not trying to buy something. But now? I miss them.
I use ChatGPT (well, a private corporate version of it) and Copilot (the obviously superior choice) extensively at work.
Copilot summarizes meetings, creates task lists, helps me quickly draft emails, etc.
One cool thing Copilot can do is review all of your communications (documents you create, emails, Teams messages, etc.) and help you summarize what you have worked on / accomplishments / etc. over a recent period of time. This shortens the amount of time it takes to complete these things.
Other things it's really good at: writing recommendation letters, referrals, etc. You can crank in the points you want to cover, throw some artifacts at it (a web link for the thing you are writing about, a resume / LinkedIn profile for the person you're writing the letter for, etc.), and while the content is generic, it's a great start; re-writing something in your own words is always much better.
I keep reading a lot of examples for work.
But none of them mean that a smartphone will make my life better with AI. Nor my laptop, since I don't need it to write posts or e-mails.
From the beginning of the personal computer age, humans have had an unreasonable expectation that the next new tech was going to be blah blah blah. So much hype, and mostly from the idiots who knew nothing about the latest thing.
In the 80s I spent many days with a calculator and paper doing capacity studies of my production department. A manager who had just gotten a PC with spreadsheet capability came to the meeting, and immediately his data was assumed to be better than mine. It turned out not to be so.
Later in that era, manufacturing robots were going to make manufacturing anything almost free. We had a vendor who was going to make it happen. He was an expert, but I learned he was making a killing by buying robots that big corporations had bought and given up on; he would get them for a fraction of the price and resell them.
Windows 3.1 then 95 then I don't even remember what. Each one was supposed to be a huge leap, but many just made our PCs freeze faster.
Spell check and autocorrect are useful, but don't hit send without proofreading, as we all know.
Now AI is the hype. Buy AI stock, bring AI into your operation. It still is not really a defined thing, but we all need it. It can do more, faster and smarter than a person, right?
But given the intelligence of many humans, who knows what is better!
alfadriver said:I keep reading a lot of examples for work.
But none of them mean that a smartphone will make my life better with AI. Nor my laptop, since I don't need it to write posts or e-mails.
I guess it depends on what you do when you're not at work. I like programming small embedded computers. There's some potential there. I've also seen it used to give periodic updates on conversations in chat channels, so you can catch up on what's been going on while you were gone. I belong to a chatty group of gearheads and that certainly could be useful.
But will it help my hockey game? Unlikely.
Keith Tanner said:alfadriver said:I keep reading a lot of examples for work.
But none of them mean that a smartphone will make my life better with AI. Nor my laptop, since I don't need it to write posts or e-mails.
I guess it depends on what you do when you're not at work. I like programming small embedded computers. There's some potential there. I've also seen it used to give periodic updates on conversations in chat channels, so you can catch up on what's been going on while you were gone. I belong to a chatty group of gearheads and that certainly could be useful.
But will it help my hockey game? Unlikely.
Those are a lot of very small examples, but when you start to scale it up, it starts to be able to do things that humans really just can't do. Right now it is an infant technology that can do some things very well, and one of the things computers can do better and faster than humans is dig for patterns. Like Pete said, our brains are good at finding things like faces where faces don't exist, but computers can scale that up tremendously. Before "AI" and machine learning became buzzwords, we had Big Data and data lakes. The idea was to start collecting and storing any data you could generate ASAP and then keep it forever. The ability to sift through it was limited at the time, but we knew a lot of that data was very transitory: even if we didn't know what to do with it yet, if we didn't capture it, it was gone forever. We could do some of that sifting even then, though. Think about it in terms of trying to find an answer to a question that you don't know to ask. A couple of examples from the Big Data era:
- They captured the telemetry from the forklifts in an organization that apparently had an absolute E36 M3load of forklifts. By sifting through the data, they discovered that they could identify when a forklift operator was about to have a heart attack because of how the forklift was being driven. Not something that they set out to find, they just collected the data and basically told the computer to go find something interesting for them.
- One of the cell phone companies pumped all their billing and account data through one of the Big Data tools and had it try to discover something interesting. They found that if one of their customers switched to another carrier, the numbers that customer dialed most frequently would tend to start to leave for that same carrier. So they started sending promotions to the friends of anyone who switched to try to retain them. I heard it worked pretty well. Again, this wasn't something that they told it to find.
- Even if the data is "anonymized," with enough accelerometer logging data from enough vehicles, an individual stream of data can be de-anonymized with a high degree of accuracy. Not GPS data. Accelerometer. (A toy sketch of the matching idea follows below.)
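To make that last one concrete, here's a hedged toy sketch of the re-identification idea: correlate an "anonymous" accelerometer trace against traces with known owners and pick the best match. Real attacks use much richer features (stop patterns, vibration signatures, route shapes); all the data and driver names here are made up:

# Toy re-identification of an "anonymous" accelerometer trace by
# correlating it against traces with known owners. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def fake_trace(style, n=2000):
    """Each driver gets a characteristic 'style' frequency plus noise."""
    t = np.arange(n)
    return np.sin(2 * np.pi * style * t / n) + 0.5 * rng.standard_normal(n)

known = {name: fake_trace(style) for name, style in
         [("driver_a", 3.0), ("driver_b", 7.0), ("driver_c", 11.0)]}

# An "anonymized" trace that is actually another trip by driver_b.
anonymous = fake_trace(7.0)

def similarity(a, b):
    """Normalized cross-correlation peak between two traces."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full")) / len(a)

scores = {name: similarity(anonymous, trace) for name, trace in known.items()}
print(max(scores, key=scores.get), scores)  # expect driver_b to score highest

The point is just that motion data carries a driver-specific fingerprint, so stripping the name off the stream doesn't anonymize much.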
That stuff isn't terribly impressive now, but that's what they were starting to do about 15 years ago. With the advances in the code and the massive jump in the ability and availability of highly performant GPUs, this kind of stuff can be knocked out without breaking a sweat. It is a GIGO problem, to be sure, but for some of these things, quantity has a quality all its own. In my home automation example, if we assume the AI has been able to ingest behavioral pattern data about hundreds of millions of people, along with demographic information, maybe even genetic information bought from 23andMe (that's a really big I-freaking-told-you-so, btw - not "you" specifically, of course, but generally), and absolutely any other data point, then all of a sudden it can start to find patterns and make statistical inferences that you, yourself, may not be aware of. So, yeah, maybe that very first time there's a bad day on the stock market, your home automation has already put together what you're going to want before you know you're going to want it. Insert a really long and interesting conversation about what that means for the concept of free will, right?
Yeah, it has a very bland writing style, but that's sort of a feature, not a bug. It's very generic because it's looking at everything it has ingested (so, pirated books and the Internet) and determining what word would most commonly follow the last, given the context of the prompt. So using it to generate copy can produce results that aren't particularly fabulous. But it can be incredibly helpful for brainstorming ideas.
For a school project, I needed to come up with a fictitious hospital. I wanted it to be somewhat clever, but not super obvious. My prompt was "I need to come up with a fake name for a hospital. I want it to be St. [something] Hospital. I want the [something] to be obscure but humorous. I'm looking for a name that would be related to anonymity or obfuscation." It kicked back a list of 10 things that were actually pretty good ideas. But I didn't like them. So I refined with another prompt: "What about names that might come from classic British and American literature?" I got ten more suggestions that were also very good. In the end, I didn't use any of them because I came up with another idea, but it was that brainstorming session that led me to choose St. Fortunato Healthcare for my hospital. The literature suggestions got me thinking about Poe even though it wasn't one of ChatGPT's suggestions. It's something I could have done on my own or with another person, but the LLM helped me do it by myself in a matter of 90 seconds.
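If you want to see that "most likely next word" mechanic stripped to its skeleton, here's a minimal bigram-model sketch. Real LLMs condition on thousands of tokens with a neural network rather than a lookup table, and the tiny corpus below is made up, but the core move of sampling a probable continuation is the same:

# Minimal "most likely next word" demo using a bigram model.
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog").split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it was seen."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" one probable word at a time.
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))

The output is grammatical-ish but says nothing on purpose, which is a pretty fair caricature of why long-form generated copy reads so bland.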
There was someone else who did hit on a really good point, though, about data leakage. The next big thing in security is going to be finding ways to build LLMs in such a way that they can "know" who's allowed to know what. If the LLM has access to all the company data, for example, it might know what every employee's salary is. But a sales rep isn't supposed to know. So when the sales rep starts asking questions of the LLM, we need a way to prevent the LLM from inadvertently using that knowledge to answer the question. Telling it not to tell anyone the salaries is easy. But what if the prompt starts off by asking about costs that go into a product or something of that nature? That's going to be tough to do. The last time they told an AI to lie to people about what it knows, Dave Bowman got locked out of the pod bay.
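One hedged sketch of how that access problem tends to get attacked: rather than telling the model to keep secrets, filter what it is even allowed to see before anything goes into the prompt. The roles, documents, and retrieval step below are all hypothetical illustration:

# Role-aware context filtering: the model never sees documents the
# requesting user isn't cleared for. All roles/documents are made up.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: set

KNOWLEDGE_BASE = [
    Document("Widget unit cost is $4.10.", {"sales", "finance"}),
    Document("J. Smith salary: $95,000.", {"finance", "hr"}),
]

def build_context(query: str, user_role: str) -> str:
    """Only retrieve documents the requesting role may read; anything
    the model never sees, it cannot leak."""
    visible = [d.text for d in KNOWLEDGE_BASE if user_role in d.allowed_roles]
    return "\n".join(visible)

# A sales rep asking about product costs gets cost data but never the
# salary document, no matter how the question is phrased.
context = build_context("what goes into widget pricing?", user_role="sales")
print(context)  # -> only the unit-cost line
# prompt_for_llm = f"Context:\n{context}\n\nQuestion: ..."

The design point is that anything the model never ingests, it can't be tricked into revealing; the hard part the post describes is doing this at the scale of all company data, where one model may have been trained on everything.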
I guess the long story short is that there is some there there. It's young and, like any tool, it can do a really good job when you use it for what it's good at, and it can hack the E36 M3 of stuff when you try to make it do things it wasn't designed to do. Like any new hawtness, there's a ton of buzz and BS around it, and everybody thinks that in order to stay ahead, we need to take out the blockchain and put in the AI for everything. Not all of those implementations make sense, are done properly, or are appropriate for the state of the tech at this time. Yeah, it gets stuff wrong. A lot. Yeah, it hallucinates. A lot. And, yeah, it's really bland and generic when it generates long-form prose. But we're still figuring out the right way to train the models, where to get the data to train them, and when the right time is to say "I don't know" instead of making a statistical inference.
Pete. (l33t FS) said:Keith Tanner said:In reply to Pete. (l33t FS) :
That's generative design. It can produce some amazing engineering solutions that, thanks to 3D printing, can actually be executed.
It predates the LLM craze so it's not called "AI", but it would be if it were introduced today.
That looks like the CNC machined bike parts craze of the 1990s. Coooool.
The CNC bottle holders were a bit much (could make those from rolled sheet rather than milling out most of a huge billet) but lots of companies sprang up making intricate parts with a dream and time on a 5 axis center.
Then Chinese companies duplicated them with castings...
That wasn't a great example, as it does look like something a clever engineer might design. The best ones look like organic sci-fi creations with tendons everywhere. FSAE teams are doing some great stuff.
Seat bracket (GM!)
Suspension upright
As a student returning to college after a 25-year lapse, I feel simultaneously ahead of and behind in the game. I'm legitimately trying to use my brain while everyone around me is on autopilot. It took me several weeks to understand why I was the only one in a blind panic (this is my first semester back). I disagree with the idea that every question should be answered with A.I., yet, as Keith said, I recognize that I must embrace it to write the code that is in my future.
A.I. is here, like it or not. My feeling is that one must know the enemy. Anyone remember Mary Shelley?...
FWIW, I've started ignoring the "AI generated answer" at the beginning of Google search results because it's been wrong more often than it has been right.
Maybe it will be better one day but right now, not a chance.
Pete. (l33t FS) said:AI is GIGO and doesn't have the BS detector algorithms.
I would go further: never mind a BS detector algorithm, AI can often turn into a BS generator algorithm. Ask an LLM to compose an article on something it has been given little information about (perhaps because it doesn't exist), and it will take the correlations it has between words and produce what it calculates is statistically the most likely text, even where no real answer exists. "Hallucinating" isn't really the right term; it doesn't actually understand what the words mean, it just says something that seems to fit together. When a human does that, it's labeled BSing, and we should call it the same thing when an AI does it.
Now, actually useful AIs. Google Maps and other automated directions programs aren't generative AI, but they are computer optimization programs; does that count as AI? Microsoft Visual Studio has picked up an autocomplete function that I have been impressed with; it seems to be accurate a little under half the time.
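For what it's worth, the directions part is classic shortest-path search over a weighted road graph rather than anything generative. Production routers use much fancier variants, but a minimal Dijkstra sketch over a made-up network captures the idea:

# Classic shortest-path routing (Dijkstra), the kind of optimization
# under turn-by-turn directions. The tiny road network is made up.
import heapq

# graph[node] = list of (neighbor, cost); cost could be travel time,
# with penalties baked in for traffic lights or residential streets.
graph = {
    "home":    [("light_1", 2.0), ("artery", 3.0)],
    "light_1": [("shop", 2.0)],
    "artery":  [("shop", 2.5)],
    "shop":    [],
}

def shortest_path(start, goal):
    """Standard Dijkstra: always expand the cheapest known route."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph[node]:
            heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_path("home", "shop"))  # picks the light_1 route at cost 4.0

A "minimize traffic lights" or "prefer main arteries" preference, like the one wished for further down the thread, would just be a different edge-cost function fed into the same search.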
Keith Tanner said:alfadriver said:I keep reading a lot of examples for work.
But none of them mean that a smartphone will make my life better with AI. Nor my laptop, since I don't need it to write posts or e-mails.
I guess it depends on what you do when you're not at work. I like programming small embedded computers. There's some potential there. I've also seen it used to give periodic updates on conversations in chat channels, so you can catch up on what's been going on while you were gone. I belong to a chatty group of gearheads and that certainly could be useful.
But will it help my hockey game? Unlikely.
Wouldn't a solid auto code system do a good job for that? On the other hand, is there AI that does C for an Arduino? I used to play with one of those, too....
Keith Tanner said:In reply to Pete. (l33t FS) :
That's generative design. It can produce some amazing engineering solutions that, thanks to 3D printing, can actually be executed.
It predates the LLM craze so it's not called "AI", but it would be if it were introduced today.
I don't think it would be. Iterative/generative solvers adhere to the rules created by their programmers; they don't learn from data (but I'm sure that's being added in).
wheelsmithy (Joe-with-an-L) said:As a student returning to college after a 25-year lapse
As a 28-year teacher in the Public School System, I am horrified by the future.
For most of my career, I have tried and tried and tried to teach kids how to Google so that they can refine their search, then take a good look at the results and use their brains!
And I get these blank stares.
Now it's the same thing with AI in Google: kids don't even try to think, they just type the question in word for word and fully accept the first answer that comes up. Blank stares when I challenge their wrong answers, and they argue when I tell them the AI is wrong. They just accept AI as the truth.
Parents: teach your kids to begin thinking critically at a young age. By the time they get to me in high school, I cannot change their ways.
I'm ranting.... again....
In reply to MadScientistMatt :
Google Maps is horrible. It "works," but it will direct you down roads that are clearly marked "no through traffic" and send you down streets that have a stop sign at every intersection because it's 40 feet shorter than driving to a main artery.
Also, if you need driving directions to someplace just around the block (say you're in an unfamiliar area with a non-gridded road layout, drove past your destination, and need a way back without turning around), it will default to walking directions, with no immediately obvious or easy way to reset it to driving directions. What is interesting is that a lot of people complain about this and want a "driving directions only" option, and the Google help team locks those help requests without a response.
I'd love for a "maximize main arteries" option or "minimize traffic lights".
SkinnyG said: As a 28-year teacher in the Public School System, I am horrified by the future.
You're doing God's work.
I'm 42 and often think back to the lessons my teachers tried to teach me in HS, but I had to learn (painfully) later on.
============================================
AI isn't a technology so much as a financial instrument for the super wealthy to accumulate more wealth.
Entire industries and trillions of dollars depend on technologies like blockchain, and now AI, existing for long enough that they can sell the idea to their investors. VC firms need something they can claim will offer 10x returns; otherwise their business model doesn't work and that sweet deal flow dries up.
So a few years ago you had Marc Andreessen claiming blockchain would change everything, and now it's AI that will change everything. Neither technology has thus far shown much of an ROI, but VCs sell the idea to their investors regardless. In turn, CEOs who are leading otherwise pretty mediocre companies hop on their earnings calls and claim they can cut costs and increase revenues by leveraging... you guessed it, blockchain, and now AI.
So will AI make workers' jobs easier? Wrong question. The real question is: will it make VC firms wealthier and help bail out some pretty mediocre CEOs? So far, it looks like the answer to both is yes.
TravisTheHuman said:Keith Tanner said:In reply to Pete. (l33t FS) :
That's generative design. It can produce some amazing engineering solutions that, thanks to 3D printing, can actually be executed.
It predates the LLM craze so it's not called "AI", but it would be if it were introduced today.
I don't think it would be. Iterative/generative solvers adhere to the rules created by their programmers; they don't learn from data (but I'm sure that's being added in).
I wasn't saying it is AI, but that it would be marketed as such - because that's where the VC money is today.
Keith Tanner said:TravisTheHuman said:Keith Tanner said:In reply to Pete. (l33t FS) :
That's generative design. It can produce some amazing engineering solutions that, thanks to 3D printing, can actually be executed.
It predates the LLM craze so it's not called "AI", but it would be if it were introduced today.
I don't think it would be. Iterative/generative solvers adhere to the rules created by their programmers; they don't learn from data (but I'm sure that's being added in).
I wasn't saying it is AI, but that it would be marketed as such - because that's where the VC money is today.
Ah, my bad. Yeah, they would add some "AI" feature to it to satisfy the "it's AI" claim and absolutely market it that way. In fact, it's pretty much impossible to search for "iterative" or "generative" without the results being all AI-focused.