Anduril has announced EagleEye, an AI-powered mixed-reality (MR) system designed to be built into soldiers’ helmets.

The modular hardware is a “family of systems,” according to Anduril’s announcement, including a heads-up display, spatial audio, and radio frequency detection. It can display mission briefings and orders, overlay maps and other information during combat, and control drones and military robotics.

“We don’t want to give service members a new tool—we’re giving them a new teammate,” says Luckey. “The idea of an AI partner embedded in your display has been imagined for decades. EagleEye is the first time it’s real.”

[–] Awoo@hexbear.net 1 points 1 week ago (1 children)

Yeah, you can already do some of this with AR apps for existing headsets that have an AR mode.

[–] gayspacemarxist@hexbear.net 1 points 6 days ago (1 children)

I know about AR; I mean the video-game aim-assist stuff. It just doesn't seem practical. It seems roughly the same as using a sight with a bunch of extra sensors in the middle. Any calibration issues with sights would still exist, plus you'd have all the work of integrating the gun (identification, sensors for precise positioning, data transmission, another fucking battery). I don't doubt the MIC wants something like this, but there are always tradeoffs, and in this case it seems like you add a lot of weight, complexity, and maintenance for limited benefit. Then again, maybe they have a really nice aim-assist system and the only thing holding them back was that the helmet was too heavy.

[–] Awoo@hexbear.net 1 points 6 days ago (2 children)

> sensors for precise positioning

No, you're overthinking it. The "sensor" already exists on the headset in the form of multiple cameras, set apart at known distances, which lets them combine images taken from different positions into a 3D image of the scene. You can see this clearly on the current version of the headset (three cameras in the middle of the helmet).

The older prototypes they were working with had even more cameras.

This can actually be done with just 2 cameras: https://youtu.be/5LWVtC4ZbK4

The technique for this is very simple depth measurement. I'm sure you understand that once you have a 3D image of everything in frame, what you can do with it is pretty simple and going to be accurate. You can probably assume they're using wide fisheye lenses, so they have an extremely clear view of everything.
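A minimal sketch of that depth step, assuming rectified grayscale frames and using OpenCV's classic block matcher; the focal length and baseline here are made-up illustrative values, not anything Anduril has published:

```python
import cv2
import numpy as np

# Illustrative camera parameters only -- not real EagleEye specs.
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.12   # distance between the two cameras, in metres

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Per-pixel depth (metres) from a rectified 8-bit grayscale stereo pair."""
    # Block matching: for each pixel, find how far a patch has shifted
    # horizontally between the two views (the disparity).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Depth falls out of similar triangles: Z = f * B / d.
    # A wider camera spacing (baseline) gives better range resolution.
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth
```

With three cameras you get multiple baselines to match across, but the maths for each pair is the same.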

[–] gayspacemarxist@hexbear.net 1 points 6 days ago* (last edited 6 days ago) (1 children)

shrug-outta-hecks Overthinking is how I live my life. I'm pretty skeptical in general, but maybe I'll take some time later to see if I can convince myself that something like this can work.

[–] Awoo@hexbear.net 1 points 6 days ago

All you really need are:

  1. A real-time 3D model of what is currently being seen, achieved with multiple cameras.
  2. A real-time 3D model of the rifle being aimed, with the ability to recognise where the rifle's barrel is pointing. This can be achieved with a laser on the rifle, or with the kind of image recognition that already exists in VR, where a hand pointing at something is recognised accurately enough to be used for mouse-style pointing and clicking. All of this will work fine as long as the rifle is in frame of the camera, which it will be on such a wide-FOV camera (see the sketch below).
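A rough sketch of how those two pieces combine, assuming you already have a depth map from the cameras and a barrel pose (muzzle position plus direction in the headset's camera frame) from whichever tracking method is used; every name and parameter here is illustrative, not from Anduril:

```python
import numpy as np

# Illustrative pinhole intrinsics -- not real hardware values.
FX, FY, CX, CY = 700.0, 700.0, 640.0, 360.0

def project(point_cam: np.ndarray) -> tuple[int, int]:
    """Project a 3D point in the camera frame to pixel coordinates."""
    u = FX * point_cam[0] / point_cam[2] + CX
    v = FY * point_cam[1] / point_cam[2] + CY
    return int(round(u)), int(round(v))

def predicted_impact_pixel(depth_map: np.ndarray,
                           barrel_origin: np.ndarray,
                           barrel_dir: np.ndarray,
                           max_range_m: float = 300.0,
                           step_m: float = 0.25):
    """March along the barrel ray until it passes behind the scene surface.

    depth_map     -- per-pixel depth in metres (e.g. from the stereo step)
    barrel_origin -- muzzle position in the camera frame, from rifle tracking
    barrel_dir    -- unit vector along the barrel in the camera frame
    Returns the (u, v) pixel where the aim marker should be drawn, or None.
    """
    h, w = depth_map.shape
    for t in np.arange(step_m, max_range_m, step_m):
        p = barrel_origin + t * barrel_dir
        if p[2] <= 0:
            continue                      # sample is behind the camera
        u, v = project(p)
        if not (0 <= u < w and 0 <= v < h):
            continue                      # sample falls outside the camera view
        if p[2] >= depth_map[v, u]:
            return u, v                   # ray has reached the scene surface
    return None                           # nothing hit within max range
```

This treats the shot as a straight ray; in practice you'd smooth the barrel pose over time and fold in ballistics, but the geometric core is just a ray test against the reconstructed scene.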