Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago
 
 

Hey Beeple and visitors to Beehaw: I think we need to have a discussion about !technology@beehaw.org, community culture, and moderation. First, some of the reasons that I think we need to have this conversation.

  1. Technology got big fast and has stayed Beehaw's most active community.
  2. Technology gets more reports (about double in the last month, by a rough hand count) than the next-highest community that I moderate (Politics, and this during an election season that involved a disastrous debate, an assassination attempt on a candidate, and a major party's presumptive nominee dropping out of the race).
  3. For a long time, I and other mods have felt that Technology at times isn’t living up to the Beehaw ethos. More often than I like I see comments in this community where users are being abusive or insulting toward one another, often without any provocation other than the perception that the other user’s opinion is wrong.

Because of these reasons, we have decided that we may need to be a little more hands-on with our moderation of Technology. Here’s what that might mean:

  1. Mods will be more actively removing comments that are unkind or abusive, that involve personal attacks, or that just have really bad vibes.
    a. We will always try to be fair, but you may not always agree with our moderation decisions. Please try to respect those decisions anyway. We will generally try to moderate in a way that is a) proportional, and b) gradual.
    b. We are more likely to respond to particularly bad behavior from off-instance users with pre-emptive bans. This is not because off-instance users are worse, or less valuable, but simply because we aren't able to vet users from other instances and don't interact with them with the same frequency, and other instances may have less strict sign-up policies than Beehaw, making it more difficult to play whack-a-mole.
  2. We will need you to report early and often. The drawbacks of getting reports for something that doesn't require our intervention are outweighed by the benefits of being able to get to a situation before it spirals out of control. By all means, if you're not sure whether something rises to the level of violating our rules, say so in the report reason, but I'd personally rather get reports early than late, when a thread has devolved into an all-out flamewar.
    a. That said, please don't report people for being wrong, unless they are doing so in a way that is actually dangerous to others. It would be better for you to kindly disagree with them in a nice comment.
    b. Please, feel free to try and de-escalate arguments and remind one another of the humanity of the people behind the usernames. Remember to Be(e) Nice even when disagreeing with one another. Yes, even Windows users.
  3. We will try to be more proactive in stepping in when arguments are happening and trying to remind folks to Be(e) Nice.
    a. This isn't always possible. Mods are all volunteers with jobs and lives, and things often get out of hand before we are aware of the problem due to the size of the community and mod team.
    b. This isn't always helpful, but we try to make these kinds of gentle reminders our first resort when we get to things early enough. It’s also usually useful in gauging whether someone is a good fit for Beehaw. If someone responds with abuse to a gentle nudge about their behavior, it’s generally a good indication that they either aren’t aware of or don’t care about the type of community we are trying to maintain.

I know our philosophy posts can be long and sometimes a little meandering (personally that's why I love them) but do take the time to read them if you haven't. If you can't/won't or just need a reminder, though, I'll try to distill the parts that I think are most salient to this particular post:

  1. Be(e) nice. By nice, we don't mean merely polite, or nice in the surface-level "oh bless your heart" way; we mean kind.
  2. Remember the human. The users that you interact with on Beehaw (and most likely other parts of the internet) are people, and people should be treated kindly and in good-faith whenever possible.
  3. Assume good faith. Whenever possible, and until demonstrated otherwise, assume that users don't have a secret, evil agenda. If you think they might be saying or implying something you think is bad, ask them to clarify (kindly) and give them a chance to explain. Most likely, they've communicated themselves poorly, or you've misunderstood. After all of that, it's possible that you may disagree with them still, but we can disagree about Technology and still give one another the respect due to other humans.
 
 

New documents and court records obtained by EFF show that Texas deputies queried Flock Safety's surveillance data in an abortion investigation, contradicting the narrative promoted by the company and the Johnson County Sheriff that the woman was “being searched for as a missing person,” and that “it was about her safety.”

The new information shows that deputies had initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman’s self-managed abortion, and consulted prosecutors about possibly charging her.

Johnson County Sheriff Adam King repeatedly denied the automated license plate reader (ALPR) search was related to enforcing Texas's abortion ban, and Flock Safety called media accounts "false," "misleading" and "clickbait." However, according to a sworn affidavit by the lead detective, the case was in fact a death investigation in response to a report of an abortion, and deputies collected documentation of the abortion from the "reporting person," her alleged romantic partner. The death investigation remained open for weeks, with detectives interviewing the woman and reviewing her text messages about the abortion.

The documents show that the Johnson County District Attorney's Office informed deputies that "the State could not statutorily charge [her] for taking the pill to cause the abortion or miscarriage of the non-viable fetus."

Your tax dollars at work. Let's spend a month investigating something the DA immediately shuts down.

But, hey: Good on the DA.

 
 

Unfortunately, the technology of the future demands a high price. On top of the exorbitant energy cost fueling a return to industrial-era levels of pollution, AI is also propped up by a massive global sweatshop operation, where low-wage workers in underdeveloped countries are tasked with doing the hidden intellectual labor that makes the tech useful.

As reported by Agence France-Presse, workers in long-exploited countries like Kenya, Colombia, and India are becoming increasingly outraged over the miserable labor of AI training. For example, as the wire service notes, for an AI chatbot to generate an autopsy report, contract workers have to sift through thousands of gruesome crime scene images, a gig known as “data labeling.”

Though the work is often done remotely — thus saving on the overhead costs of leasing an office — data labeling isn’t exactly a cushy laptop job. Workers involved in this industrial operation describe grueling hours, few if any workplace protections, and frequent tasks involving violent or grisly content. In theory, it’s not unlike social media content moderation, another digital practice built on exploitative labor in the developing world.

“You have to spend your whole day looking at dead bodies and crime scenes,” Ephantus Kanyugi, a Kenyan data labeler, told AFP. “Mental health support was not provided.”

I could swear I've seen this movie before.

 
 

Like many researchers, Gerlich believes that, used in the right way, AI can make us cleverer and more creative – but the way most people use it produces bland, unimaginative, factually questionable work. One concern is the so-called “anchoring effect”. If you post a question to generative AI, the answer it gives you sets your brain on a certain mental path and makes you less likely to consider alternative approaches. “I always use the example: imagine a candle. Now, AI can help you improve the candle. It will be the brightest ever, burn the longest, be very cheap and amazing looking, but it will never develop to the lightbulb,” he says.

To get from the candle to a lightbulb you need a human who is good at critical thinking, someone who might take a chaotic, unstructured, unpredictable approach to problem solving. When, as has happened in many workplaces, companies roll out tools such as the chatbot Copilot without offering decent AI training, they risk producing teams of passable candle-makers in a world that demands high-efficiency lightbulbs.

There is also the bigger issue that adults who use AI as a shortcut have at least benefited from going through the education system in the years before it was possible to get a computer to write your homework for you. One recent British survey found that 92% of university students use AI, and about 20% have used AI to write all or part of an assignment for them.

Under these circumstances, how much are they learning? Are schools and universities still equipped to produce creative, original thinkers who will build better, more intelligent societies – or is the education system going to churn out mindless, gullible, AI essay-writing drones?

 
 

The European Commission has revised the Ecodesign requirements for external power supplies (EPS). The new rules aim to increase consumer convenience, resource efficiency, and energy efficiency. Manufacturers have three years to prepare for the changes.

The new regulations apply to external power supplies that charge or power devices such as laptops, smartphones, Wi-Fi routers, and computer monitors. Starting in 2028, these products must meet higher energy efficiency standards and become more interoperable. Specifically, USB chargers on the EU market must have at least one USB Type-C port and function with detachable cables.

With the regulation, the EU is also establishing minimum efficiency requirements for power supplies with an output power of up to 240 watts that charge via USB Power Delivery (USB-PD). Power supplies with an output power exceeding 10 watts will also have to meet minimum energy efficiency values in partial-load operation (10 percent of rated power), which is intended to reduce unnecessary energy losses.
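The partial-load requirement boils down to a simple check: measure input and output power at 10 percent of rated load and compare the ratio against the regulatory minimum. A minimal sketch in Python, where the 82 percent threshold and the wattage figures are made-up placeholders — the article doesn't give the actual limit values:

```python
# Sketch of a partial-load efficiency check, as the new EU rules require
# for supplies above 10 W of rated output. The threshold and measurements
# below are HYPOTHETICAL placeholders, not the actual regulatory figures.

def partial_load_efficiency(p_out_watts: float, p_in_watts: float) -> float:
    """Efficiency at a given load point: output power / input power."""
    return p_out_watts / p_in_watts

RATED_POWER_W = 65.0                    # example: a 65 W USB-PD laptop charger
PARTIAL_LOAD_W = RATED_POWER_W * 0.10   # the 10%-of-rated-power test point

# Example measurement: 6.5 W delivered while drawing 8.0 W from the wall.
eff = partial_load_efficiency(PARTIAL_LOAD_W, 8.0)
print(f"Efficiency at 10% load: {eff:.1%}")

MIN_EFFICIENCY = 0.82  # placeholder threshold, not the real limit
print("Compliant" if eff >= MIN_EFFICIENCY else "Non-compliant")
```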

Interestingly,

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.

I was rather surprised to see a Heise story linked off Slashdot, but this is a great use of technology to broaden the audience. Way less friction than every user needing a browser plugin for translation.

Historically, the charger situation has been, well, a shitshow. I've had more than a few devices over the years where the proprietary cable died, and getting a replacement was so cost prohibitive that just buying a replacement device that came with one was the more logical move.

USB-PD kinda seems like the silver bullet here. Though I'm not sure that's true unless further regulations require all USB-C cables to be able to provide 240W, and even then, there will be a long tail on older cables still being in use.
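For reference, USB-PD power is just voltage times current: the familiar 100 W ceiling is 20 V at 5 A (Standard Power Range), while 240 W requires 48 V at 5 A (Extended Power Range), which is exactly why ordinary cables can't deliver it. A trivial sketch:

```python
# USB-PD power math: power = voltage × current.
# 100 W = 20 V × 5 A (SPR, needs a 5 A e-marked cable);
# 240 W = 48 V × 5 A (EPR, needs an EPR-rated cable).

def pd_power(voltage_v: float, current_a: float) -> float:
    return voltage_v * current_a

print(pd_power(20, 5))  # 100 W — SPR maximum
print(pd_power(48, 5))  # 240 W — EPR maximum
```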

But this could be game-changing in a decade or so. Imagine charging with any cable and any DC converter. We'll look back on this mess as though different devices had required proprietary wall outlets!

Meanwhile, here in the states, regulators are much more concerned about inflatable frog suits than any lurch toward sustainability.

 
 

Shortly after Colombian presidential candidate Miguel Uribe Turbay was shot at a political rally in June, hundreds of videos of the attack flooded social media. Some of these turned out to be deepfakes made with artificial intelligence, forcing police and prosecutors to spend hours checking and debunking them during the investigation. A teenager was eventually charged.

Increasing adoption of AI is transforming Latin America’s justice system by helping tackle case backlogs and improve access to justice for victims. But it is also exposing deep vulnerabilities through its rampant misuse, bias, and weak oversight as regulators struggle to keep up with the pace of innovation.

Law enforcement doesn’t yet “have the capacity to look at these judicial matters beyond just asking whether a piece of evidence is real or not,” Lucia Camacho, public policy coordinator of Derechos Digitales, a digital rights group, told Rest of World. This may prevent victims from accessing robust legal frameworks and judges with knowledge of the technology, she said.

Justice systems across the world are struggling to address harms from deepfakes that are increasingly used for financial scams, in elections, and to spread nonconsensual sexual imagery. There are currently over 1,300 initiatives in 80 countries and international organizations to regulate AI, but not all of these are laws, nor do they all cover deepfakes, according to the Organisation for Economic Co-operation and Development.

 
 

I'm not going to pretend I know all the electrical shit going on here, but this is overall pretty badass. It's always cool when YouTube actually suggests something interesting.

 
 

Bad news, baby. The New Yorker reports the rapid advance of AI in the workplace will create a “permanent underclass” of everyone not already hitched to the AI train.

The prediction comes from OpenAI employee Leopold Aschenbrenner, who claims AI will “reach or exceed human capacity” by 2027. Once it develops capacity to innovate, AI superintelligence will supersede even a need for its own programmers … and then wipe out the jobs done by everyone else.

Nate Soares, winner of “most sunshine in book title” and co-author of AI critique If Anyone Builds It, Everyone Dies suggests “people should not be banking on work in the long term”. Math tutors, cinematographers, brand strategists and journalists are quoted by the New Yorker, freaking out.

The consolation here is that if you are among those panicking about being forced into the permanent underclass, you are already in it. Inherited wealth makes more billionaires than entrepreneurship does, and the opportunity gap is growing; if your family don’t have the readies to fund your tech startup, media empire or eventual presidential ambitions, it’s probably because they were in a tech-displaced underclass, too.

 
 

Aside from a MAGA hat, there is likely no object that feels more emblematic of US president Donald Trump’s return to the White House than the Tesla Cybertruck. The blunt angles and steel doors look futuristic, for sure, but only if the future looks a lot like RoboCop. To some, it’s a metallic status symbol. To others, it’s fascism on wheels. Either way, heads turn.

Cybertruck owners see things differently. “To me, it's just a vehicle that I love,” says Andrew Castillo, a stock trader from Los Angeles. “It has no political affiliations at all to me.”

We’re standing in the parking lot of McCormick's Palm Springs Classic Car Auctions. All around us, a dozen Cybertruck owners—and their cars—bake in the 100 degree heat. They’ve arrived for a meetup organized by Michael Goldman, who runs the 53,000-person Facebook group Cybertruck Owners Only. Though suspicious of the media, they’re eager to set the record straight about the car that they love. WIRED is here to learn how it feels to be out in public in such a politically charged vehicle. Has the past year or so changed anyone’s minds about owning the truck? Do owners like the attention—or are they adding bumper stickers decrying Elon Musk?

As we’re talking, a woman drives by in a small sedan. “Your cars are fucking ugly!” she screams before peeling off. Castillo smiles. “Some people just aren’t playing with a full deck of cards,” he says serenely.

 
 

Admittedly, this is 80%-90% rehash. If you're familiar with Doctorow, you've heard most of it.

Other than his gaffe reversing the players in the proposed Skydance-WBD merger, it's a decent interview.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate through clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.'”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

 
 

Invidious: https://inv.nadeko.net/watch?v=VOORiyip4_c

YouTube: https://youtu.be/VOORiyip4_c

The video covers a new paper on a technique to eliminate clipping of vectors. The only problem is that it is extremely computationally expensive. I compare this to ray tracing, which only became viable once all the tools implemented it. I assume the hardware chips that support ray tracing could be used for this new technology too, but that is just my personal assumption.

I left the original title of the video, as it would be editorial otherwise.

Video description (only relevant parts):


📝Paper: drive.google.com/file/d/1OrOKJH_im1L4j1cJB18sfvNHEbZVSqjL/view Code and examples are available here: github.com/st-tech/ppf-contact-solver Guide on how to try it: drive.google.com/file/d/1n068Ai_hlfgapf2xkAutOHo3PkLpJXA4/view

Sources: youtube.com/watch?v=5GDIoshj9Rw youtube.com/watch?v=X53VuYLP0VY youtube.com/shorts/x0WjJgotCXU youtube.com/watch?v=Qu4Of18Kf2M

📝 My paper on simulations that look almost like reality is available for free here: rdcu.be/cWPfD

Or this is the orig. Nature Physics link with clickable citations: nature.com/articles/s41567-022-01788-5

 
 

I always love Backblaze's analysis. It's how I learned I wasn't special for having those notorious 4TB Seagate drives (I want to say DM003) shit the bed in short order.

Increased longevity is undoubtedly a plus; however, whether doing a RAID rebuild or just being out a 24TB drive (hopefully, you've got a backup), that's a lot of time and effort to get back to square one.

If you’ve hung around Backblaze for a while (and especially if you’re a Drive Stats fan), you may have heard us talking about the bathtub curve. In Drive Failure Over Time: The Bathtub Curve Is Leaking, we challenged one of reliability engineering’s oldest ideas—the notion that drive failures trace a predictable U-shaped curve over time.

But, the data didn’t agree. Our fleet showed dips, spikes, and plateaus that refused to behave. Now, after 13 years of continuous data, the picture is clearer—and stranger.

The bathtub curve isn’t just leaking, and the shape of reliability might look more like an ankle-high wall at the entrance to a walk-in shower. The neat story of early failures, calm middle age, and gentle decline no longer fits the world our drives inhabit. Drives are getting better—or, more precisely, the Drive Stats dataset says that our drives are performing better in data center environments.
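For anyone unfamiliar, the textbook bathtub curve models the total failure rate as the sum of a decreasing infant-mortality hazard, a constant random-failure rate, and an increasing wear-out hazard. A minimal sketch using Weibull hazards — the parameters here are purely illustrative, not fitted to Backblaze's data:

```python
# The classic "bathtub" failure-rate model the post is questioning:
# total hazard = infant mortality (decreasing) + random failures (constant)
#              + wear-out (increasing). Illustrative parameters only.

def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Weibull hazard rate h(t) = (k/λ) * (t/λ)^(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t_years: float) -> float:
    infant  = weibull_hazard(t_years, shape=0.5, scale=2.0)  # early failures
    random  = 0.01                                           # constant background
    wearout = weibull_hazard(t_years, shape=4.0, scale=8.0)  # aging drives
    return infant + random + wearout

# The hazard dips in the middle years and rises again at the ends — the U shape.
for t in [0.25, 1, 3, 5, 7, 9]:
    print(f"year {t:>4}: hazard ≈ {bathtub_hazard(t):.4f}")
```

Backblaze's point is that their fleet's empirical curve no longer looks like this sum: the wear-out rise keeps getting pushed out and flattened.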

So, let’s talk about what our current “bathtub curve” looks like, and how it compares to earlier generations of the analysis.

The TL;DR: Hard drives are getting better, and lasting longer.

 
 

cross-posted from: https://lemmy.world/post/37439462

cross-posted from: https://lemmy.world/post/37439450

S.B. No. 2420

AN ACT relating to the regulation of platforms for the sale and distribution of software applications for mobile devices.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1. Subtitle C, Title 5, Business & Commerce Code, is amended by adding Chapter 121 to read as follows:

CHAPTER 121. SOFTWARE APPLICATIONS

SUBCHAPTER A. GENERAL PROVISIONS

Sec. 121.001. SHORT TITLE. This chapter may be cited as the App Store Accountability Act.

Sec. 121.002. DEFINITIONS. In this chapter:
(1) "Age category" means information collected by the owner of an app store to designate a user based on the age categories described by Section 121.021(b).
(2) "App store" means a publicly available Internet website, software application, or other electronic service that distributes software applications from the owner or developer of a software application to the user of a mobile device.
(3) "Minor" means a child who is younger than 18 years of age who has not had the disabilities of minority removed for general purposes.
(4) "Mobile device" means a portable, wireless electronic device, including a tablet or smartphone, capable of transmitting, receiving, processing, and storing information wirelessly that runs an operating system designed to manage hardware resources and perform common services for software applications on handheld electronic devices.
(5) "Personal data" means any information, including sensitive data, that is linked or reasonably linkable to an identified or identifiable individual. The term includes pseudonymous data when the data is used by a person who processes or determines the purpose and means of processing the data in conjunction with additional information that reasonably links the data to an identified or identifiable individual. The term does not include deidentified data or publicly available information.

SUBCHAPTER B. DUTIES OF APP STORES

Sec. 121.021. DUTY TO VERIFY AGE OF USER; AGE CATEGORIES.
(a) When an individual in this state creates an account with an app store, the owner of the app store shall use a commercially reasonable method of verification to verify the individual's age category under Subsection (b). (b) The owner of an app store shall use the following age categories for assigning a designation: (1) an individual who is younger than 13 years of age is considered a "child"; (2) an individual who is at least 13 years of age but younger than 16 years of age is considered a "younger teenager"; (3) an individual who is at least 16 years of age but younger than 18 years of age is considered an "older teenager"; and (4) an individual who is at least 18 years of age is considered an "adult." Sec. 121.022. PARENTAL CONSENT REQUIRED. (a) If the owner of the app store determines under Section 121.021 that an individual is a minor who belongs to an age category that is not "adult," the owner shall require that the minor's account be affiliated with a parent account belonging to the minor's parent or guardian. (b) For an account to be affiliated with a minor's account as a parent account, the owner of an app store must use a commercially reasonable method to verify that the account belongs to an individual who: (1) the owner of the app store has verified belongs to the age category of "adult" under Section 121.021; and (2) has legal authority to make a decision on behalf of the minor with whose account the individual is seeking affiliation. (c) A parent account may be affiliated with multiple minors' accounts. (d) Except as provided by this section, the owner of an app store must obtain consent from the minor's parent or guardian through the parent account affiliated with the minor's account before allowing the minor to: (1) download a software application; (2) purchase a software application; or (3) make a purchase in or using a software application. 
(e) The owner of an app store must: (1) obtain consent for each individual download or purchase sought by the minor; and (2) notify the developer of each applicable software application if a minor's parent or guardian revokes consent through a parent account. (f) To obtain consent from a minor's parent or guardian under Subsection (d), the owner of an app store may use any reasonable means to: (1) disclose to the parent or guardian: (A) the specific software application or purchase for which consent is sought; (B) the rating under Section 121.052 assigned to the software application or purchase; (C) the specific content or other elements that led to the rating assigned under Section 121.052; (D) the nature of any collection, use, or distribution of personal data that would occur because of the software application or purchase; and (E) any measures taken by the developer of the software application or purchase to protect the personal data of users; (2) give the parent or guardian a clear choice to give or withhold consent for the download or purchase; and (3) ensure that the consent is given: (A) by the parent or guardian; and (B) through the account affiliated with a minor's account under Subsection (a). (g) If a software developer provides the owner of an app store with notice of a change under Section 121.053, the owner of the app store shall: (1) notify any individual who has given consent under this section for a minor's use or purchase relating to a previous version of the changed software application; and (2) obtain consent from the individual for the minor's continued use or purchase of the software application. 
(h) The owner of an app store is not required to obtain consent from a minor's parent or guardian for: (1) the download of a software application that: (A) provides a user with direct access to emergency services, including: (i) 9-1-1 emergency services; (ii) a crisis hotline; or (iii) an emergency assistance service that is legally available to a minor; (B) limits data collection to information: (i) collected in compliance with the Children's Online Privacy Protection Act of 1998 (15 U.S.C. Section 6501 et seq.); and (ii) necessary for the provision of emergency services; (C) allows a user to access and use the software application without requiring the user to create an account with the software application; and (D) is operated by or in partnership with: (i) a governmental entity; (ii) a nonprofit organization; or (iii) an authorized emergency service provider; or (2) the purchase or download of a software application that is operated by or in partnership with a nonprofit organization that: (A) develops, sponsors, or administers a standardized test used for purposes of admission to or class placement in a postsecondary educational institution or a program within a postsecondary educational institution; and (B) is subject to Subchapter D, Chapter 32, Education Code. Sec. 121.023. DISPLAY OF AGE RATING FOR SOFTWARE APPLICATION. (a) If the owner of an app store that operates in this state has a mechanism for displaying an age rating or other content notice, the owner shall: (1) make available to users an explanation of the mechanism; and (2) display for each software application available for download and purchase on the app store the age rating and other content notice. 
(b) If the owner of an app store that operates in this state does not have a mechanism for displaying an age rating or other content notice, the owner shall display for each software application available for download and purchase on the app store: (1) the rating under Section 121.052 assigned to the software application; and (2) the specific content or other elements that led to the rating assigned under Section 121.052. (c) The information displayed under this section must be clear, accurate, and conspicuous. Sec. 121.024. INFORMATION FOR SOFTWARE APPLICATION DEVELOPERS. The owner of an app store that operates in this state shall, using a commercially available method, allow the developer of a software application to access current information related to: (1) the age category assigned to each user under Section 121.021(b); and (2) whether consent has been obtained for each minor user under Section 121.022. Sec. 121.025. PROTECTION OF PERSONAL DATA. The owner of an app store that operates in this state shall protect the personal data of users by: (1) limiting the collection and processing of personal data to the minimum amount necessary for: (A) verifying the age of an individual; (B) obtaining consent under Section 121.022; and (C) maintaining compliance records; and (2) transmitting personal data using industry-standard encryption protocols that ensure data integrity and confidentiality. Sec. 121.026. VIOLATION. (a) The owner of an app store that operates in this state violates this subchapter if the owner: (1) enforces a contract or a provision of a terms of service agreement against a minor that the minor entered into or agreed to without consent under Section 121.022; (2) knowingly misrepresents information disclosed under Section 121.022(f)(1); (3) obtains a blanket consent to authorize multiple downloads or purchases; or (4) shares or discloses personal data obtained for purposes of Section 121.021, except as required by Section 121.024 or other law. 
(b) The owner of an app store is not liable for a violation of Section 121.021 or 121.022 if the owner of the app store: (1) uses widely adopted industry standards to: (A) verify the age of each user as required by Section 121.021; and (B) obtain parental consent as required by Section 121.022; and (2) applies those standards consistently and in good faith. Sec. 121.027. CONSTRUCTION OF SUBCHAPTER. Nothing in this subchapter may be construed to: (1) prevent the owner of an app store that operates in this state from taking reasonable measures to block, detect, or prevent the distribution of: (A) obscene material, as that term is defined by Section 43.21, Penal Code; or (B) other material that may be harmful to minors; (2) require the owner of an app store that operates in this state to disclose a user's personal data to the developer of a software application except as provided by this subchapter; (3) allow the owner of an app store that operates in this state to use a measure required by this chapter in a manner that is arbitrary, capricious, anticompetitive, or unlawful; (4) block or filter spam; (5) prevent criminal activity; or (6) protect the security of an app store or software application. SUBCHAPTER C. DUTIES OF SOFTWARE APPLICATION DEVELOPERS Sec. 121.051. APPLICABILITY OF SUBCHAPTER. This subchapter applies only to the developer of a software application that the developer makes available to users in this state through an app store. Sec. 121.052. DESIGNATION OF AGE RATING. (a) The developer of a software application shall assign to each software application and to each purchase that can be made through the software application an age rating based on the age categories described by Section 121.021(b). 
(b) The developer of a software application shall provide to each app store through which the developer makes the software application available: (1) each rating assigned under Subsection (a); and (2) the specific content or other elements that led to each rating provided under Subdivision (1).
Sec. 121.053. CHANGES TO SOFTWARE APPLICATIONS. (a) The developer of a software application shall provide notice to each app store through which the developer makes the software application available before making any significant change to the terms of service or privacy policy of the software application. (b) For purposes of this section, a change is significant if it: (1) changes the type or category of personal data collected, stored, or shared by the developer; (2) affects or changes the rating assigned to the software application under Section 121.052 or the content or elements that led to that rating; (3) adds new monetization features to the software application, including: (A) new opportunities to make a purchase in or using the software application; or (B) new advertisements in the software application; or (4) materially changes the functionality or user experience of the software application.
Sec. 121.054. AGE VERIFICATION. (a) The developer of a software application shall create and implement a system to use information received under Section 121.024 to verify: (1) for each user of the software application, the age category assigned to that user under Section 121.021(b); and (2) for each minor user of the software application, whether consent has been obtained under Section 121.022. (b) The developer of a software application shall use information received from the owner of an app store under Section 121.024 to perform the verification required by this section.
Sec. 121.055. USE OF PERSONAL DATA.
(a) The developer of a software application may use personal data provided to the developer under Section 121.024 only to: (1) enforce restrictions and protections on the software application related to age; (2) ensure compliance with applicable laws and regulations; and (3) implement safety-related features and default settings.
(b) The developer of a software application shall delete personal data provided by the owner of an app store under Section 121.024 on completion of the verification required by Section 121.054.
(c) Notwithstanding Subsection (a), nothing in this chapter relieves a social media platform from doing age verification as required by law.
Sec. 121.056. VIOLATION. (a) Except as provided by this section, the developer of a software application violates this subchapter if the developer: (1) enforces a contract or a provision of a terms of service agreement against a minor that the minor entered into or agreed to without consent under Section 121.054; (2) knowingly misrepresents an age rating or reason for that rating under Section 121.052; or (3) shares or discloses the personal data of a user that was acquired under this subchapter. (b) The developer of a software application is not liable for a violation of Section 121.052 if the software developer: (1) uses widely adopted industry standards to determine the rating and specific content required by this section; and (2) applies those standards consistently and in good faith. (c) The developer of a software application is not liable for a violation of Section 121.054 if the software developer: (1) relied in good faith on age category and consent information received from the owner of an app store; and (2) otherwise complied with the requirements of this section.
SUBCHAPTER D. ENFORCEMENT
Sec. 121.101. DECEPTIVE TRADE PRACTICE.
A violation of this chapter constitutes a deceptive trade practice in addition to the practices described by Subchapter E, Chapter 17, and is actionable under that subchapter.
Sec. 121.102. CUMULATIVE REMEDIES. The remedies provided by this chapter are not exclusive and are in addition to any other action or remedy provided by law.
SECTION 2. It is the intent of the legislature that every provision, section, subsection, sentence, clause, phrase, or word in this Act, and every application of the provisions in this Act to every person, group of persons, or circumstances, is severable from each other. If any application of any provision in this Act to any person, group of persons, or circumstances is found by a court to be invalid for any reason, the remaining applications of that provision to all other persons and circumstances shall be severed and may not be affected.
SECTION 3. This Act takes effect January 1, 2026.

______________________________ 	______________________________
   President of the Senate 	Speaker of the House     

       I hereby certify that S.B. No. 2420 passed the Senate on
April 16, 2025, by the following vote: Yeas 30, Nays 1; and that
the Senate concurred in House amendments on May 14, 2025, by the
following vote: Yeas 30, Nays 1.
  

______________________________
Secretary of the Senate    

       I hereby certify that S.B. No. 2420 passed the House, with
amendments, on May 9, 2025, by the following vote: Yeas 120,
Nays 9, three present not voting.
  

______________________________
Chief Clerk of the House   

  

Approved:
  
______________________________ 
            Date
  
  
______________________________ 
          Governor
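The data flow the bill prescribes between app stores and developers (Secs. 121.021–121.024 and 121.054: the store assigns an age category, obtains parental consent for minors, and shares both with developers, who must honor them) can be sketched as a minimal data model. This is a hypothetical illustration only — the age-category boundaries, class names, and functions below are assumptions, not text from the bill, which defines its categories in Section 121.021(b) (not reproduced in this excerpt):

```python
from dataclasses import dataclass

# Assumed age-category boundaries, for illustration; the bill's actual
# categories are defined in Section 121.021(b).
CATEGORIES = [(0, 13, "child"), (13, 16, "younger teenager"),
              (16, 18, "older teenager"), (18, 200, "adult")]

def age_category(age: int) -> str:
    """Map an age to a category label (app store duty, Sec. 121.021)."""
    for lo, hi, name in CATEGORIES:
        if lo <= age < hi:
            return name
    raise ValueError(f"unsupported age: {age}")

@dataclass
class AppStoreRecord:
    """Per-user record the app store makes available to developers
    (Sec. 121.024): the age category plus whether parental consent
    has been obtained (Sec. 121.022)."""
    user_id: str
    category: str
    parental_consent: bool = False

def developer_may_enforce_terms(record: AppStoreRecord) -> bool:
    """Secs. 121.026(a)(1) and 121.056(a)(1): terms a minor agreed to
    without verified parental consent are unenforceable."""
    if record.category == "adult":
        return True
    return record.parental_consent
```

A developer-side check under Sec. 121.054 would call `developer_may_enforce_terms` using only the record received from the store, then delete the personal data once verification completes (Sec. 121.055(b)).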

Some excerpts:

Since February, Google researchers have observed two groups turning to a newer technique to infect targets with credential stealers and other forms of malware. The method, known as EtherHiding, embeds the malware in smart contracts, which are essentially apps that reside on blockchains for Ethereum and other cryptocurrencies. Two or more parties then enter into an agreement spelled out in the contract. When certain conditions are met, the apps enforce the contract terms in a way that, at least theoretically, is immutable and independent of any central authority.

  • Decentralization prevents takedowns of the malicious smart contracts, because blockchain mechanisms bar the removal of any contract once deployed.
  • Similarly, the immutability of the contracts prevents anyone from removing or tampering with the malware.
  • Transactions on Ethereum and several other blockchains are effectively anonymous, protecting the hackers’ identities.
  • Retrieval of malware from the contracts leaves no trace of the access in event logs, providing stealth.
  • The attackers can update malicious payloads at any time.
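The last two properties above can be sketched with a toy model: plain Python dicts stand in for blockchain state and the transaction log, and `read_payload` mimics a read-only call (like Ethereum's `eth_call`), which a node evaluates locally without creating an on-chain transaction. This is a simplified illustration of the mechanism, not real Ethereum tooling; all names are invented:

```python
# Toy EtherHiding model: contract storage and a transaction log.
chain_state: dict[str, str] = {}          # contract address -> hex payload
event_log: list[tuple[str, str]] = []     # only state CHANGES are recorded

def deploy_payload(address: str, payload: bytes) -> None:
    """A state-changing transaction: cheap and repeatable, which is
    what lets attackers rotate payloads at will."""
    chain_state[address] = payload.hex()
    event_log.append(("tx", address))

def read_payload(address: str) -> bytes:
    """A read-only call: retrieves the payload without ever touching
    the log, so victims fetching malware leave no on-chain trace."""
    return bytes.fromhex(chain_state[address])
```

In the real attack the payload would be malicious JavaScript stored in a contract's data; the key point the model captures is the asymmetry between writes (logged, but cheap) and reads (unlogged).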

Creating or modifying a smart contract typically costs less than $2 per transaction, a huge savings in funds and labor over more traditional methods of delivering malware.

Layered on top of the EtherHiding activity Google observed was a social-engineering campaign that used fake job recruiting to lure targets, many of whom were developers of cryptocurrency apps or other online services. During the screening process, candidates must complete a test demonstrating their coding or code-review skills. The files required to complete the tests are embedded with malicious code.


Recently, Salesforce CEO Marc Benioff warned investors to avoid the "false prophets" of AI. Now, the Pope has brought real theological weight to the bot debate, hosting a Vatican seminar that called for global AI regulation and fair distribution of the technology's benefits.

The seminar [PDF] – dubbed Digital Rerum Novarum: Artificial Intelligence for Peace, Social Justice, and Integral Human Development – was organized by the Pontifical Academy of Social Sciences together with the University of Notre Dame.

The newly installed, American-born Pope Leo XIV said in a message to the attendees that while it has great potential, AI poses deep questions, not least how to create a "more authentically just and human global society."

More bluntly, he quoted his predecessor to remind his flock that "While undoubtedly an exceptional product of human genius, AI is 'above all else a tool.'"


Ubuntu Touch, Sailfish OS, Tizen, Mobian, etc.
