this post was submitted on 16 Oct 2025

Anduril has unveiled EagleEye, an AI-powered mixed-reality (MR) system designed to be built into soldiers' helmets.

The modular hardware is a “family of systems,” according to Anduril’s announcement, including a heads-up display, spatial audio, and radio frequency detection. It can display mission briefings and orders, overlay maps and other information during combat, and control drones and military robotics.

“We don’t want to give service members a new tool—we’re giving them a new teammate,” says Luckey. “The idea of an AI partner embedded in your display has been imagined for decades. EagleEye is the first time it’s real.”

[–] Awoo@hexbear.net 1 points 2 weeks ago (2 children)

sensors for precise positioning

No, you're overthinking it. The "sensor" already exists on the headset in the form of multiple cameras set apart at fixed, known distances, which lets the system combine images taken from different positions into a 3D reconstruction of the scene. You can see this clearly on the current version of the headset (three cameras in the middle of the helmet).

The older prototypes they were working with had even more cameras.

This can actually be done with just 2 cameras: https://youtu.be/5LWVtC4ZbK4

The technique behind this is simple stereo depth measurement. I'm sure you can see that once you have a 3D model of everything in frame, working out positions from it is straightforward and accurate. You can probably assume these use wide fisheye lenses, so the cameras have an extremely clear view of everything around the wearer.
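The core of that stereo depth measurement is one formula: a point seen by two cameras a known distance apart appears at slightly different horizontal pixel positions, and that shift (the disparity) gives the distance. A minimal sketch, with made-up numbers (the focal length and camera spacing here are illustrative, not Anduril's specs):

```python
# Depth from stereo disparity:  Z = f * B / d
#   f = focal length in pixels
#   B = baseline (distance between the two cameras) in metres
#   d = disparity (horizontal pixel shift between the two views)

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance (metres) to a point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means the point is at infinity)")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 700 px focal length, cameras 6.25 cm apart,
# point shifted 17.5 px between the two views -> 2.5 m away.
print(depth_from_disparity(700, 0.0625, 17.5))  # 2.5
```

Doing this for every pixel (what stereo-matching libraries automate) yields a full depth map, i.e. the real-time 3D model of the scene.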

[–] gayspacemarxist@hexbear.net 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

*shrug* Overthinking is how I live my life. I'm pretty skeptical in general, but maybe I'll take some time later to see whether I can convince myself that something like this could work.

[–] Awoo@hexbear.net 1 points 2 weeks ago

All you really need are:

  1. A real-time 3D model of what is currently being seen, built from the multiple cameras.
  2. A real-time 3D model of the rifle being aimed, including where the barrel is pointing. That can be achieved with a laser on the rifle, or with the kind of image recognition that already exists in VR to accurately track a hand pointing at something for cursor control. All of this works fine as long as the rifle is in frame of the camera, which it will be with such a wide-FOV lens.
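Putting the two together, the HUD overlay step would amount to: march along the barrel's aim ray until it meets the reconstructed scene surface, then project that 3D point back into the display with a pinhole camera model. A stdlib-only sketch under stated assumptions (the poses, focal length, and the stand-in `scene_depth` function are all hypothetical):

```python
import math

def project(point, focal_px=700.0, cx=640.0, cy=400.0):
    """Pinhole projection of a camera-space point (x right, y down, z forward) to pixels."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera, nothing to draw
    return (cx + focal_px * x / z, cy + focal_px * y / z)

def aim_point(muzzle, direction, scene_depth, step=0.1, max_range=300.0):
    """Walk along the barrel ray until it reaches the scene's depth surface.

    scene_depth(x, y) -> depth (metres) of the reconstructed scene in that
    direction; a stand-in here for the stereo depth map.
    """
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]  # normalised aim direction
    t = step
    while t < max_range:
        p = [m + t * c for m, c in zip(muzzle, d)]
        if p[2] >= scene_depth(p[0], p[1]):  # ray has met the surface
            return p
        t += step
    return None  # nothing within range

# Toy scene: a flat wall 10 m in front of the cameras; rifle held 0.3 m
# below the helmet and 0.5 m forward, aimed straight ahead.
wall = lambda x, y: 10.0
hit = aim_point(muzzle=[0.0, 0.3, 0.5], direction=[0.0, 0.0, 1.0], scene_depth=wall)
print(project(hit))  # reticle pixel to draw on the HUD
```

The fixed-step ray march is the crude version; a real system would intersect against the depth map directly, but the pipeline (barrel pose → ray → scene intersection → projected reticle) is the same.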