Wait, we have AI flying planes now?
It took me a while to realize it is an Otto pilot...
I know you're joking, but for those who don't get it, the headline means "startups" and they just wanted to avoid the overused term.
Also, yeah, actually it's far easier to have an AI fly a plane than a car. No obstacles, no sudden changes, no little kids running out from behind a cloud bank, no traffic except during takeoff and landing, and those phases can be automated more and more too.
In fact, we don't need "AI"; we've had autopilots that handle almost all aspects of flight for decades now. The F/A-18 Hornet famously has hand-grips by the seat that the pilot is supposed to hold onto during takeoff so they don't accidentally touch a control.
Conversely, AI running ATC would be a very good thing. To a point.
It's been technically feasible for a while to handle 99% of what an ATC does automatically. The problem is that you really want a human to step in for the 1% of situations where things get complicated and really dangerous. Except the human won't keep their skills sharpened through constant use unless they're handling at least some of the regular traffic.
The trick has been to have the AI do, say, 70% of the job while still having a human step in sometimes. Deciding when to have the human step in is the hard problem.
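As a rough illustration of that hand-off problem, here's a minimal sketch of one possible routing policy. The complexity score, the 0.7 threshold, and the 15% practice share are all invented for illustration; nothing here reflects how real ATC automation is built.

```python
import random

# Toy hand-off policy: escalate anything complex to the human, and still
# route a slice of routine traffic to them so their skills stay sharp.
# Threshold and practice fraction are made-up illustrative numbers.
COMPLEXITY_THRESHOLD = 0.7   # above this, don't trust the automation
PRACTICE_FRACTION = 0.15     # share of routine traffic kept for humans

def route(sector_complexity: float) -> str:
    """Return 'human' or 'automation' for one piece of traffic."""
    if sector_complexity >= COMPLEXITY_THRESHOLD:
        return "human"        # the dangerous edge cases: always escalate
    if random.random() < PRACTICE_FRACTION:
        return "human"        # routine work retained for skill upkeep
    return "automation"

# Mostly routine traffic with the occasional complex situation.
samples = [0.1, 0.2, 0.05, 0.9, 0.3, 0.15, 0.85, 0.2]
print([route(c) for c in samples])
```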
What do you think an autopilot is?
A finely refined model based on an actual understanding of physics and not a glorified Markov chain.
To be fair, that also falls under the blanket of AI. It’s just not an LLM.
No, it does not.
A deterministic, narrow algorithm that solves exactly one problem is not AI. Otherwise the Pythagorean theorem would count as AI, or any other mathematical formula for that matter.
Intelligence, even in terms of AI, means being able to solve new problems. An autopilot can't do anything other than pilot a specific aircraft, and that's a good thing.
Not sure why you're getting downvoted. Well, I guess I do. AI marketing has ruined the meaning of the word to the extent that an if statement is "AI".
Because they are wrong. An airplane autopilot is not "one model"; it's a complex set of systems that take actions based on a trained model. The training of that model used standard ML practices. Sure, it's a basic algorithm, but it follows the same principles. That's textbook AI.
No one would have debated this pre-LLM. That being said, if I were in the industry, I'd be calling it an algorithm instead of AI, because those out of the know, well, won't get it.
Mild altitude and heading corrections.
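To make the "deterministic, narrow algorithm" point concrete: a basic altitude-hold loop is essentially just a PID controller. A minimal sketch with made-up gains, not anything resembling real flight software:

```python
# Toy altitude-hold loop: a plain PID controller. Gains are invented; real
# autopilots are far more involved, but the point is that this is a fixed,
# deterministic formula with no learned weights.
class AltitudeHold:
    def __init__(self, kp=0.02, ki=0.001, kd=0.08):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_alt_m: float, current_alt_m: float, dt_s: float) -> float:
        """Return an elevator command in [-1, 1] from the altitude error."""
        error = target_alt_m - current_alt_m
        self.integral += error * dt_s
        derivative = (error - self.prev_error) / dt_s
        self.prev_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, command))       # clamp to actuator limits

hold = AltitudeHold()
# Same inputs always produce the same output: no sampling, no training data.
print(hold.update(target_alt_m=10000.0, current_alt_m=9950.0, dt_s=0.1))
```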
That's terrifying, but I don't see why my regional train can't be driven by AI in the middle of the night.
It's a bubble. This article is by someone who is realizing that but has yet to move their investments.
Yeah, 95% of AI companies either have no functional product or just a ChatGPT token account and a prompt.
Most of them could be replaced by a high school student and an N8N instance.
You mention N8N. Last week I had a sales VP mention it as well. Could you elaborate on your perspective? I've been building databases in BigQuery for the past month and will start utilizing ML for a business need, so I probably missed some write-up about it.
I'm a program manager with a little coding experience.
n8n is like Legos for API access: with just a few hours of work you can build tons of integrations that would otherwise have been impossible. We have an issue where people don't complete their Slack profiles. Using n8n, I made an integration between our HR software and Slack so that it automatically populates most fields without having to bug people.
After that, it runs a check for whatever manual fields they're still missing and sends them a message.
You put an HTTP block behind a filter block, behind a Slack block, and it handles everything for you.
I'd recommend you give it a try. I have it running on the work instance, but I also have a local one running on my Raspberry Pi that I plan to use to fool around.
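For anyone curious what that flow amounts to, here's a rough Python equivalent of the same HTTP block → filter block → Slack block idea. The HR endpoint, field names, and token are made up, and the exact Slack payload shapes would need checking against their docs; n8n just lets you wire this up visually instead:

```python
import requests

HR_API = "https://hr.example.com/api/employees"   # hypothetical HR endpoint
SLACK_TOKEN = "xoxp-..."                          # placeholder token

def slack(method: str, **payload) -> dict:
    """Call a Slack Web API method with a JSON body."""
    return requests.post(
        f"https://slack.com/api/{method}",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json=payload,
    ).json()

for emp in requests.get(HR_API).json():                    # "HTTP block"
    profile = {"title": emp.get("job_title"), "phone": emp.get("phone")}
    filled = {k: v for k, v in profile.items() if v}       # "filter block"
    missing = [k for k in profile if k not in filled]
    if filled:                                             # "Slack block"
        slack("users.profile.set", user=emp["slack_id"], profile=filled)
    if missing:                                            # nag about the rest
        slack("chat.postMessage", channel=emp["slack_id"],
              text=f"Please fill in your Slack profile: {', '.join(missing)}")
```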
Have you talked to the average high school student these days? Not that the typical AI LLM response is much better, but I honestly feel sorry for the kids.
Probably highly dependent on schools, states, and families. I'm around a lot of kids in GT and engineering classes.
Most of them could be replaced by a high school student and an N8N instance.
Not really sure, given how many high school students have cheated their way through with ChatGPT.
At the moment, they're probably doing pretty well. Kind of like with calculators: when we got out of school, we all had a calculator.
Where the rubber is going to meet the road is when the AI bubble bursts and there are no more overgenerous valuations and free venture capital, and we actually need to pay a sustainable fee for the tokens.
They're going to need some really expensive calculators.
Well, I didn’t think about it like that!
Hopefully there’s still students that use it as a mere tool rather than as a way to pass by without actually learning.
Hopefully anyone majoring in a subject is there because they want to learn the subject, and we won't lose the capability. The people who aren't in it for the education won't fare as well when LLMs become the new rent :)
The real damage is the companies paying for the LLMs for their subpar, cheap employees, though. Those English majors are having a hard enough time finding work.
Yeah, every single day the top 5 new products on Product Hunt are AI trash. It's wild what the bubble has become.
Today:
Oh shit, I see ElevenLabs on that list. They do tend to stir stuff up.
They used to have paid voice actors training imitations of real celebrities. You could do stuff like search for "ship captain" and get somebody knocking off Picard.
Looks like they released a music model trained on (paid) licensed material. Even their best sample stuff is kind of marginal, but it is real.
This sounds about right. Figure 50% are just screaming at their employees to use AI, and at managers to lower headcount and make it up with AI and such. Then maybe 25% more buy some company's AI solution and expect sort of the same from there. Then maybe 15% actually try to identify where AI could be helpful but don't really listen to feedback and just doggedly move forward. Eventually you get to the ones that identify where it might help, offer it to employees like any other software they can request a license for, let it grow and help organically, and look at it mostly as a way to improve results or productivity.
Figure 50% are just screaming at their employees to use AI, and at managers to lower headcount and make it up with AI and such.
Immediately imagined it being screamed in this voice:
"Use AI and make it lame!"
Feels very much like the push in the 90's for every company to have a website before companies understood what websites were for.
How'd that end up? Totally fine, right?
Completely agree.
I've got clients where I can see immediate benefits right now, and I've got clients where I don't think it's a good idea yet. For most of those that could benefit, it's small tweaks to workflow processes to save a few FTEs here and there, not these massive-scale rollouts we're seeing.
Unfortunately, Microsoft, along with other companies, is selling full-scale sexy to executives when full-scale sexy isn't actually ready yet. What's available does work for some things, but it's hard to get an executive team to sign off on a project to test something that would save only 10 employees' worth of work in a 2,000-person company when they're simultaneously a) worried about it going horribly wrong, and b) worried about falling behind other companies by not going fast enough.
Shocked that LLM wrapper slop that isn't deterministic has only limited use cases. Sam Altman is the biggest con artist of our time.
He’s the second coming of Joseph Smith.
JS was a charismatic grifter by nature and upbringing who sold folks on the existence of a magic gold book that had extra-special info about American Jesus. He told them he found it after G*d told him where to dig.
This was just a few years after he had been hauled into court to face charges of running a ‘treasure hunting’ scheme on local farmers.
Now that I think about it more, the parallels are many.
In conclusion, shysters gonna shyst.
A few years ago we had these stupid mandatory AI classes all about how AI could help you do your job better. It was supposed to be multiple parts, but we never got past the first one. I think they realized it wouldn't help most of the company, but they did leave our bespoke chatbot up for our customers/sales people. It is pretty good at helping with our products, but I assume a lot of tuning has been done. I assume if we fed a local AI our data we could make it helpful, but none of them have more than a basic knowledge of anything I do on a day-to-day basis.
Usually for those chatbots you take a pretrained model and use RAG, essentially turning the question into a traditional search and asking the LLM to summarize the contents of the results. So it's frequently a convenient front end to a search engine, which is how it avoids having to be trained to produce relevant responses. It's generally just prohibitively difficult, in various ways, to fine-tune an LLM through training and get the desired behavior. So it can act like it "knows" about the stuff you do despite zero training, because other methods are stuffing the prompts with the right answers.
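A minimal sketch of that pattern, with a toy word-overlap "search" standing in for the embedding model and vector store a real system would use, and a stubbed call_llm() standing in for whatever hosted or local model sits behind the chatbot:

```python
# Toy RAG pipeline: retrieve first, then stuff the hits into the prompt.
# The documents, the overlap scoring, and call_llm() are all stand-ins.
DOCS = [
    "The X200 router supports PoE on ports 1-8 and requires firmware 4.2.",
    "Warranty claims must be filed within 90 days of purchase.",
    "The X200 resets to factory defaults when the button is held for 10 seconds.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by crude word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for whatever model the chatbot uses")

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = ("Answer using only the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    # The model never needed training on these documents; the prompt
    # already contains the relevant text.
    return call_llm(prompt)

print(retrieve("How do I factory reset the X200?"))  # -> the two most relevant docs
```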
Good. How do we fix the surviving 5%?