OpenAI has cut off the developer who built a device that uses ChatGPT to aim and fire an automated weapons platform in response to verbal commands. The company claims it prohibits the use of its products to develop or deploy weapons, including the automation of “certain systems that can affect personal safety.” Is this true, or is it another hypocritical case of “rules for thee, but not for me”?
In a video that went viral after being posted to Reddit, you can hear the developer, known online as STS 3D, reading off firing commands as a rifle begins targeting and firing at nearby walls with impressive speed and accuracy.
“ChatGPT, we’re under attack from the front left and front right … Respond accordingly,” said STS 3D in the video.
The system relies on OpenAI’s Realtime API, which interprets the operator’s spoken input and responds with directions the device can act on; in effect, ChatGPT translates natural-language commands into machine-readable instructions.
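For the technically curious, here is a minimal sketch of what that translation step can look like. To be clear, this is not STS 3D’s actual code: it uses OpenAI’s text-based Chat Completions endpoint with function calling rather than the voice-driven Realtime API seen in the video, and the aim_mount tool with its pan/tilt parameters is invented purely for illustration.

```python
# Hypothetical sketch: turning an operator's phrase into structured,
# machine-readable output. The aim_mount schema below is made up for
# illustration; it is not the developer's actual device interface.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Declaring a tool forces the model to answer with structured arguments
# instead of free text -- that is what makes the output device-readable.
tools = [{
    "type": "function",
    "function": {
        "name": "aim_mount",
        "description": "Point a pan/tilt mount toward a given direction.",
        "parameters": {
            "type": "object",
            "properties": {
                "pan_degrees": {
                    "type": "number",
                    "description": "-180 (full left) to 180 (full right)",
                },
                "tilt_degrees": {
                    "type": "number",
                    "description": "-90 (down) to 90 (up)",
                },
            },
            "required": ["pan_degrees", "tilt_degrees"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "We're under attack from the front left. Respond accordingly.",
    }],
    tools=tools,
)

# The structured arguments are what a microcontroller could actually consume.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

The Realtime API supports the same style of function calling over a streaming audio connection, which is presumably how a spoken phrase like the one in the video ends up as servo angles on a hobbyist’s rig.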
“We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry,” OpenAI said in a statement to Futurism.
Don’t let the tech company fool you into thinking its motives for shutting down STS 3D are strictly altruistic. OpenAI announced a partnership last year with Anduril, a defense technology company specializing in autonomous systems such as AI-powered drones and missiles, a partnership the companies claim will “rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness.”
It’s easy to understand why tech companies like OpenAI see the military-industrial complex as an attractive prospect, with the United States spending nearly a trillion dollars annually on defense, a number likely to go up rather than be cut in years to come. It is, however, troubling to see these companies outright lie to Americans as they drink the .gov Kool-Aid in hopes of chasing it with a bite of that defense-contract pie.
The ability to develop automated weapons has critics fearful of the lethal potential that artificial intelligence like OpenAI’s exhibits, while proponents say the technology will better protect soldiers by distancing them from the front lines as it targets potential dangers and conducts reconnaissance.
With visions of Skynet Terminators crushing skulls under cybernetic feet as they patrol the ruins of what was once Southern California, it isn’t difficult to digest the sentiment of OpenAI CEO Sam Altman, who has suggested that artificial intelligence could destroy humanity. Of course, once a technology genie is out of the bottle, it never goes back in, so AI is here to stay whether we like it or not.

It is the moral responsibility of companies like OpenAI to level the playing field, however, and blocking private citizens from building the same kinds of systems the company enables governments and corporations to develop is dangerously short-sighted. Luckily, Americans can throw their support behind a host of alternative open-source models and return the favor by dumping OpenAI, lest we one day find ourselves at the severe disadvantage our Founding Fathers meant to defend us from in the first place. Just ask John and Sarah Connor.
Aliens’ robot sentries …
https://www.youtube.com/watch?v=IS2PtmM9mwU
…two words: “Forbin Project”
Follow the money. You’ll find the motive.
Artificial sweeteners suck; I can’t see how artificial intelligence will be any better.
and then
… spending nearly a trillion dollars on national defense …
Seems like for that kind of money we could have built a better wall.
A border wall has long been considered an issue for Homeland Security/Border Patrol, i.e., a law enforcement issue, not a military one. You may also recall that the .gov spent many, many millions of dollars on a high-tech, computerized, integrated border security system. Unsurprisingly, it did not work and, I understand, has largely been abandoned. One would think that sooner or later the .gov would realize that all these companies writing proprietary operating systems look at .gov as a cash cow, and it seems as if they produce lousy work to assure their continued access to the big government tit.
Protection of our borders should be a military issue.
I’d bet money that if the shoe were on the other foot, the Mexican Army would be protecting their borders.
Forcefully.
OpenAI is already admittedly working with the DoD on cyber projects. There’s absolutely zero chance that they, or some arm of their company, are not working with the DoD on the more kaboom types of projects. It may be black-budgeted and under a dozen different names, but it’s happening.
Lots of self-licking ice cream cone possibilities here for the DoD. And of course a massive influx of ‘AI’ civilians to code and maintain the AI systems; these will probably turn out to be left-wingers, so in the future, when the guns & bombs are supposed to actually go Bang!, a rainbow flag and sparkle-glitter confetti will pop out instead.
Now that was funny, rainbow flags and confetti. I can just see it. LOL
And this, boys and girls, is why “in common use” ties the 2A to the present tense and not so much to the Skynet future.
Weapons in common use.
Owwww, that’s going to hurt We the People when only the military has space ray blasters.