this post was submitted on 03 Nov 2025
34 points (79.3% liked)


In August, Solomon announced the government had signed an agreement with Cohere to identify where “AI tools can improve public services.”

Cohere’s reported connection to the U.S. AI firm Palantir increases the alarm. Led by MAGA funder Peter Thiel, Palantir sees the Canadian company’s models being deployed to Palantir customers, possibly including U.S. defence and intelligence agencies.

top 30 comments
[–] BroBot9000@lemmy.world 20 points 3 days ago

Fuck using AI, let alone US AI. It's fucking propaganda- and misinformation-spewing spyware.

[–] howrar@lemmy.ca 10 points 3 days ago

The article's criticisms seem to all be addressing problems with generative AI, whereas the places that are getting these grants (my lab included) don't do very much of that. It's nearly all "old-school" machine learning for modeling and various optimization problems.

The one problem I would agree with is the widening wealth inequality. I don't know what the solution to that would be. Ideally, it would involve getting rid of the current capitalist system rather than impeding technological progress. I don't feel okay with someone telling me that I'm not allowed to automate tasks that I don't like doing. In fact, I think everyone should have access to automations for things they need to do but don't want to do. Let us focus on the arts, not on doing laundry.

[–] AGM@lemmy.ca 13 points 3 days ago* (last edited 3 days ago) (1 children)

Fear-mongering article, imo. Canada does need to be very much on top of this issue, given the risks of not doing so. The national AI task force the government set up also has some good people on it. In reality, I would say Canada has not been moving fast enough on the topic.

https://www.canada.ca/en/innovation-science-economic-development/news/2025/09/government-of-canada-launches-ai-strategy-task-force-and-public-engagement-on-the-development-of-the-next-ai-strategy.html

[–] patatas@sh.itjust.works 7 points 3 days ago (1 children)

I don't see a single representative there from the arts community, the labour movement ... almost entirely industry folks.

Last I heard, there were 7500+ responses to the AI consultation survey - I filled it out, but almost every one of the roughly two dozen long-form questions was geared toward industry, and the bulk of my responses began with questioning the premise of what was being asked. None of this has been about fact-finding, it's about clearing a path to pouring billions of public dollars into an industry whose most apparent use case is surveillance.

[–] AGM@lemmy.ca 12 points 3 days ago (1 children)

No representation from labour? Did you miss the Senior Research Officer from CUPE?

Also, there is the Founding Director of the Center for Media, Technology and Democracy.

Your critique isn't totally unfair, but there is a lot of academia on the panel. It's not just industry, though it's also not a group representative of every sector that stands to be affected. There are definitely people I would like to see on there who aren't part of it, especially on education. It's a task force aligned with an already-determined strategic mandate to achieve AI sovereignty, and to shape whatever that ultimately means. It takes for granted that AI is going to be part of Canada's future in a big way. It is approached as a response to an arms race and a question of how to keep up as best we can, not a fact-finding mission. I don't think that's entirely unreasonable, as long as we have accountability on the legislation that shapes what actually goes from strategy into budget and implementation, including through things like the Artificial Intelligence and Data Act on the governance side. This group isn't governance, it's strategy.

I also disagree that the only use case is surveillance. That's also fear-mongering, but it is definitely one of the concerning use cases. There are many concerning use cases. This is where we need other civil society pressure, plus accountability in parliament and on the governance side, to provide oversight and regulation.

It's not perfect, but it's not as terrifying as the Tyee article makes it out to be.

[–] patatas@sh.itjust.works 4 points 3 days ago (1 children)

Fair enough, I guess I missed the lone labour rep in between all the folks from Cohere.

[–] AGM@lemmy.ca 7 points 3 days ago (1 children)

There's like one person from Cohere.

[–] patatas@sh.itjust.works -2 points 3 days ago* (last edited 3 days ago) (1 children)

Was rhetorical, but sure OK, let's do this:

  • one person (the CEO!) from Cohere;
  • two people from Creative Destruction Labs;
  • one person (the CEO!) from CoLab software;
  • a VP from Moov.AI;
  • the chair of Build Canada, which is basically advocating for a Canadian version of DOGE policy;
  • executive chair of Coveo, a SaaS firm;
  • a partner from VC firm Inovia Capital;
  • president of the Council of Canadian Innovators, basically an industry lobbyist;
  • someone from RBC;
  • CTO of VDURA, a US software company;
  • CEO of Aptum, a US-owned service provider to data centers;
  • CEO of Digital Moment, a "charity" that pushes tech into education systems;
  • CEO of samdesk, an AI-powered surveillance company.

Edit: not to mention that pretty much every academic on there has a vested interest in getting public funding for their work.

[–] AGM@lemmy.ca 4 points 3 days ago (1 children)

You've listed 13 who are on the industry side, including one who bridges academia and commercialization. There are 11-12 who fall across civil society, academia, and research. That doesn't seem wildly unbalanced to me, but nobody is saying it's perfect, so feel free to suggest how you think it would be better structured and what categories you would look to form it around.

[–] patatas@sh.itjust.works 1 points 3 days ago* (last edited 3 days ago) (1 children)

As I also alluded to in my edit, most of the "academics" are people developing AI, rather than analysing it from different perspectives.

Philosophers of technology and/or science, other academics in the humanities, people who work in the theory of education, labour economists, civil rights groups and others working on understanding systemic oppression and bias, authors and musicians, to name a few of the types of folks who should be in the room when our government attempts to remake society in the tech-bro image.

Edit: also, like, saying "only half of this team are part of the industry that this panel is supposed to create a regulatory framework for" is kinda wild to me. Especially given how disruptive folks like Carney & Solomon claim this tech is. You'd think we would want like 90% advocacy and civil society groups discussing the complete upheaval of our social systems, rather than literally half the people being the dead-eyed freaks trying to make billions for themselves before the planet burns to a crisp.

[–] AGM@lemmy.ca 1 points 2 days ago (1 children)

we would want like 90% advocacy and civil society groups

If Canada had a national strategy group on achieving leadership in the arts, would you say 90% of members must be from outside the arts, and that it should exclude even arts experts who receive any public funding? What would that actually achieve?

This is a strategy group on making plans for how to achieve Canadian leadership in AI. The whole purpose of it is to provide an urgent response to a lack of industrial strategy in a rapidly growing and emerging space of critical importance. They have an objective to provide an industrial strategy document. If you don’t have voices at the table who are engaged in industry, there will be no point in even forming a group because it will never achieve the goal. Nonetheless, it still has substantial civil society representation and open consultation. You didn’t like the questions in the survey? They provided an email address to receive open-ended responses where you could send whatever feedback you wanted.

Also, government is not just one group.

For long-term AI guidance with annual reports, the government also has the Advisory Council on AI. It has a mandate to ensure AI is developed in alignment with Canadian values. Its mandate was also expanded this year. https://ised-isde.canada.ca/site/advisory-council-artificial-intelligence/en

And there is the Safe and Secure AI Advisory Group, which is focused on guiding policy with respect to risks from AI. https://ised-isde.canada.ca/site/advisory-council-artificial-intelligence/en/safe-and-secure-ai-advisory-group

Still, none of these are passing legislation or allocating funds.

Government is not a monolith, and Canada is taking a layered approach to AI strategy, one layer of which is industrial policy. And, if Canadians don’t like the strategic guidance produced by any of these groups, they can pressure their representatives to shape the actual legislation around them.

Out of curiosity, what is the actual grounding of your beliefs about AI and AI policy? There is plenty to be concerned about, but your responses are also full of hyperbole. What are you basing them on?

[–] patatas@sh.itjust.works 0 points 2 days ago (1 children)

If Canada had a national strategy group on achieving leadership in the arts, would you say 90% of members must be from outside the arts

First off, I would love to see that happen. But this question misses the point. Would "leadership in the arts" have a massive impact on tech policy, in the way that "leadership on AI" is likely to impact the arts?

They have an objective to provide an industrial strategy document.

Right, this is the problem - nowhere, to paraphrase Jurassic Park, are they asking "should we do this", and instead they're only asking "how can we do this". If the discussion of "should" is off the table, then there is no point in me continuing this conversation here.

You didn’t like the questions in the survey? They provided an email address to receive open-ended responses

The entire survey was open-ended responses - well, other than a (pretty generous) character limit on the input fields.

if Canadians don’t like the strategic guidance produced by any of these groups, they can pressure their representatives to shape the actual legislation around them.

There has been loads of pushback. I have yet to see this government budge.

Out of curiosity, what is the actual grounding of your beliefs about AI and AI policy?

What is the "grounding" of any belief about anything? That's a much more interesting question, one that AI boosters would do well to think more deeply about.

[–] AGM@lemmy.ca 1 points 2 days ago (1 children)

Okay... wow. I even pointed you to two government groups working on other sides of the issue, but you're just ignoring the overall government approach.

The government approach isn't perfect, but I have no interest in arguing with someone focused on establishing an ideological position, going back to hyperbole again and again, and responding to a reasonable question with stuff like this:

What is the "grounding" of any belief about anything? That's a much more interesting question, one that AI boosters would do well to think more deeply about.

We can just leave it as agreeing to disagree. No point wasting anyone's time.

[–] patatas@sh.itjust.works 1 points 2 days ago

Um, I'm not ignoring it, it's simply that the "overall government approach" has been clearly spelled out by the Minister of AI, who has said he will not "over-index on regulation".

That's why we haven't had consultations on any other aspect of AI, only how we can help the industry make money.

As for your question about the grounding of my "belief" about AI - what kind of answer were you expecting, or would you not have acted dismissively toward?

[–] kbal@fedia.io 3 points 2 days ago

AI, fighter jets, economically unviable mining projects, attack submarines, oil pipelines, carbon capture boondoggles — Canada's government sure does have a lot of money to spend on things that don't look much like good investments.

[–] Darkcoffee@sh.itjust.works 7 points 3 days ago (3 children)

Look, I hate Carney as much as the next left-of-centre Canadian, but wanting to make sure we don't fall behind the rest of the world is not a bad thing... Now if he implements it and people's lives get worse because of it, then yeah.

[–] cecilkorik@lemmy.ca 19 points 3 days ago (1 children)

Wanting to fall behind the rest of the world is a good thing when the rest of the world is charging mindlessly towards a cliff.

[–] panda_abyss@lemmy.ca 4 points 3 days ago (2 children)

If you think all AI use is off a cliff you’re ignorant.

Throwing trillions at it? That’s bad. Exploring adoption and growing industry here? That’s pretty reasonable.

[–] patatas@sh.itjust.works 12 points 3 days ago

That doesn't require a Minister of AI, it doesn't require massive data centre approvals or incentives, and it definitely doesn't require a permissive regulatory environment that insulates AI companies from liability for harms caused.

[–] cecilkorik@lemmy.ca 7 points 3 days ago* (last edited 3 days ago)

So... throwing trillions at it is bad, but I'm not following the part where you implied I'm potentially ignorant. Do you, or don't you, want to fall behind one of the main countries that is throwing trillions at it, which you admit is bad? Is letting ourselves fall behind and proceeding very cautiously not reasonable? Did we not weather the 2008 financial crisis with much the same attitude?

Instructions unclear, got afraid of falling behind and accidentally tied my much smaller economy with a very sturdy rope to a country that is soon to be falling off a cliff.

[–] mrdown@lemmy.world 13 points 3 days ago

We have the talent to build our own AI and tools rather than keep depending on the USA.

[–] patatas@sh.itjust.works 4 points 3 days ago

In what way would we "fall behind the rest of the world"? The article outlines a ton of reasons why pushing AI will make us all worse off.

[–] OliveMoon@lemmy.ca 1 points 3 days ago (1 children)

So, I’m asking for a citation, it’s not letting me post?

[–] patatas@sh.itjust.works 4 points 3 days ago (1 children)

I can see three posts saying "citation?" including the one above.

But I'm not sure exactly what you are asking to see a citation for

[–] OliveMoon@lemmy.ca 3 points 3 days ago (1 children)

Citation: Proof of what you’re posting.

[–] patatas@sh.itjust.works 2 points 3 days ago

The text part of the original post was a quote from the article

[–] OliveMoon@lemmy.ca 1 points 3 days ago (1 children)

My take: the wealthy worldwide, not just Canada, not just the USA, are embracing AI. Every time you use that self-checkout, you're losing EI, CPP, and taxes. Someone is losing a job. Why are you checking out your own items?? Because people aren't willing to take a stand. People aren't willing to wait in a line and say "fuck you". I AM NOT CHECKING OUT MY OWN ITEMS. But you won't do it, will you? You won't take a stand.

[–] patatas@sh.itjust.works 3 points 3 days ago

Personally, I don't use AI and I don't use the self-checkout. Obviously there are plenty of other automations that have become part of the fabric of daily life, but none that are so disempowering and deskilling to the user as AI.