
Google Launches Worldwide Effort To Teach Workers and Governments About AI

Image Source: “Old Globe” by ToastyKen is licensed under CC BY 2.0. https://www.flickr.com/photos/24226200@N00/1540997910

You can listen to the audio version of the article above.

Google, owned by Alphabet, is facing a lot of pressure from regulators. They’re also trying to get ahead of new AI laws being made around the world.

To do this, they’re focusing on educating people about AI. One of their main goals is to create training programs to help workers learn about AI and how to use it.

“Getting more people and organizations, including governments, familiar with AI and using AI tools, makes for better AI policy and opens up new opportunities—it’s a virtuous cycle,” said Kent Walker, Alphabet’s president of global affairs.

Google is in hot water with governments around the world! In Europe, they’re trying to avoid getting broken up by offering to sell off part of their advertising business.

In the US, they’re fighting to keep their Chrome browser, though things might change now that there’s a new president in office.

On top of all that, countries are creating new rules around things like copyright and privacy, which are big concerns with AI. The EU is even working on a new AI law that could mean huge fines for companies like Google if they don’t play by the rules.

Google isn’t just sitting back and taking it, though. They’re trying to change the conversation around AI and address worries about job losses. They’re investing millions in AI education programs and sending their top people around the world to talk to governments about AI.

“There’s a lot of upside in terms of helping people who may be displaced by this. We do want to focus on that,” Walker said.

Google is really trying to help people learn about AI! They have this program called Grow with Google that teaches people all sorts of tech skills, like data analysis and IT support.

It’s a mix of online and in-person classes, and over a million people have already earned certificates. Now they’re adding new courses specifically about AI, even one for teachers!

But Google knows that just taking courses isn’t enough. They want to help people get real jobs, so they’re working on creating credentials that people can show to employers.

They’re also teaming up with community colleges to train people for jobs building data centers, and they’re adding AI training to that program too. It seems like they’re trying to make sure everyone has a chance to learn about AI and how it can be used.

“Ultimately, the federal government will look and see which proofs of concept are playing out—which of the green shoots are taking root,” Walker said. “If we can help fertilize that effort, that’s our role.”

Google believes that AI won’t completely replace most jobs, but it will change how we do them. They’ve looked at studies that suggest AI will become a part of almost every job in the future.

To understand how this will affect workers, they’ve even hired an economist to study the impact of AI on the workforce. This expert thinks AI could be used to create more realistic and engaging training programs, similar to flight simulators for pilots. It sounds like Google is trying to be proactive and find ways to use AI to actually improve things for workers.

“The history of adult retraining is not particularly glorious,” he said. “Adults don’t want to go back to class. Classroom training is not going to be the solution to a lot of retraining.”

It’s not just about teaching people how to use AI, though. Google also knows that AI needs to be developed and used responsibly. That’s why they’re involved in discussions about making sure AI is fair and doesn’t discriminate, and that people can understand how AI systems make decisions. They’re also working on ways to make AI safer and prevent it from doing unintended harm.

Think of it like this: they want to make sure AI is a good thing for everyone, not just a powerful tool that could be misused. They’re putting a lot of effort into figuring out how to build AI that’s ethical and benefits society as a whole.

And they’re not doing this alone. Google knows that everyone needs to be involved in shaping the future of AI. They’re talking to governments, researchers, other companies, and everyday people to try and figure out the best way forward. It’s like a big conversation about how we can all work together to make sure AI is used for good.

Basically, Google is trying to be a leader in responsible AI. They’re not just focusing on the technology itself, but also on how it impacts people and society. They want to make sure everyone benefits from AI and that it’s used in a way that we can all feel good about.

Report: Google Provided AI Services To Israel During Gaza Conflict

Image Source: “Governor Murphy attends the opening of Google AI at Princeton University in Princeton on May 2nd, 2019. Edwin J. Torres/Governor’s Office.” by GovPhilMurphy is licensed under CC BY-NC 2.0. https://www.flickr.com/photos/142548669@N05/47707659832

You can listen to the audio version of the article above.

Recent reports have cast a spotlight on the intricate relationship between Google and the Israeli military, specifically concerning the use of artificial intelligence during conflicts in Gaza.

While Google publicly distances itself from direct military applications of its technology, a closer examination of internal documents, public reports, and ongoing projects paints a more nuanced, and arguably troubling, picture.

This article delves into the specifics of this involvement, exploring the nature of the AI services provided, the resulting ethical dilemmas, and the diverse reactions from various stakeholders.

At the heart of the issue is the nature of Google’s technological contributions. Evidence suggests that Google has provided the Israeli military with access to its powerful AI technologies, including sophisticated machine learning algorithms and robust cloud computing infrastructure.

These tools offer a range of potential military applications. For instance, AI algorithms can sift through massive datasets (satellite imagery, social media activity, intelligence briefings) to pinpoint potential threats, anticipate enemy movements, and even track individuals. Furthermore, these systems can assist in target selection, potentially increasing the precision of military strikes.

While the exact ways these technologies were deployed in the Gaza conflict remain somewhat shrouded in secrecy, their potential for use in military operations raises serious ethical and humanitarian red flags.

A central point of contention in this debate is Project Nimbus, a $1.2 billion contract between Google and the Israeli government to establish a comprehensive cloud computing infrastructure.

While Google emphasizes the civilian applications of this project, critics argue that it directly benefits the Israeli military by providing access to cutting-edge technology.

Project Nimbus grants the Israeli government access to Google’s advanced cloud infrastructure, which includes AI and machine learning tools. This access allows the Israeli military to leverage Google’s technology for a variety of purposes, including intelligence gathering, logistical support, and potentially even direct combat operations.

The dual-use nature of this technology blurs the lines between civilian and military applications, raising serious ethical questions.

The revelation of Google’s deeper involvement with the Israeli military has ignited widespread criticism and raised profound ethical concerns.

One of the primary concerns is the potential humanitarian impact. Critics argue that using AI in warfare, especially in densely populated conflict zones like Gaza, significantly increases the risk of civilian casualties and exacerbates existing humanitarian crises.

The lack of transparency surrounding the deployment of AI in military operations further complicates matters, raising serious questions about accountability and the potential for misuse.

Moreover, providing advanced AI technologies to military entities can erode Google’s stated ethical principles and tarnish the company’s public image.

This controversy has also triggered internal dissent within Google itself. Many employees have voiced concerns about the ethical implications of their work and have demanded greater transparency and accountability in Google’s dealings with the Israeli military.

This employee activism has manifested in various forms, including internal protests, public statements, and even legal challenges, demonstrating a growing awareness among tech workers about the ethical and societal ramifications of their work and a desire for greater corporate responsibility.

Google’s involvement in the Gaza conflict has fueled a wider debate about the ethical and societal implications of AI in warfare.

Proponents of using AI in military contexts argue that it can enhance precision, minimize casualties, and improve overall operational efficiency. However, critics caution against the potential for unforeseen consequences, including the development of autonomous weapons systems, the perpetuation of algorithmic bias, and the gradual erosion of human control in critical decision-making processes. The debate highlights the complex and multifaceted nature of AI’s role in modern warfare.

In conclusion, the reports of Google’s collaboration with the Israeli military on AI services during the Gaza conflict have generated serious ethical and political concerns.

While Google maintains a public stance against direct military applications of its technology, the available evidence suggests a more complex relationship, raising concerns about accountability, transparency, and the potential for misuse.

This situation underscores the urgent need for a broader public conversation about the ethical implications of AI in warfare.

It is crucial for tech companies, governments, and the public at large to engage in this vital discussion to ensure that AI is developed and deployed responsibly, prioritizing human rights, humanitarian concerns, and the prevention of unintended and potentially devastating consequences.

This requires open dialogue, clear ethical guidelines, and robust mechanisms for accountability.

Google Doubles Down On AI Safety With Another $1 Billion For Anthropic, FT Reports

Image Source: “Google AI with magnifying glass (52916340212)” by Jernej Furman from Slovenia is licensed under CC BY 2.0. https://commons.wikimedia.org/w/index.php?curid=134006187

You can listen to the audio version of the article above.

The AI world is heating up, and Google just made another big move, injecting a further $1 billion into Anthropic, a company focused on building AI that’s not just smart, but also safe and reliable. This news, first reported by the Financial Times, shows Google’s serious commitment to staying at the forefront of AI, especially with rivals like Microsoft and their close ties to OpenAI pushing hard.

Anthropic, a company founded by ex-OpenAI researchers, has quickly made a name for itself by prioritizing the development of AI that we can actually trust. They’re not just building powerful models; they’re building models we can understand and control. This focus on safety is becoming increasingly important as AI gets more sophisticated.

A Strategic Bet on a Rising Star

This isn’t Google’s first rodeo with Anthropic; they’ve already invested significant sums, bringing the total to over $2 billion. This latest investment signals a deepening partnership, a real vote of confidence, and a strategic play to strengthen Google’s hand in the rapidly changing world of AI.

The timing is key. Generative AI – the kind that creates text, images, and more – is exploding in popularity. Anthropic’s star product, Claude, is a large language model (LLM) that goes head-to-head with OpenAI’s GPT models, the brains behind tools like ChatGPT. By upping its investment in Anthropic, Google gets access to cutting-edge AI tech and some of the brightest minds in the field, potentially giving their own AI development a serious boost.

Why Anthropic is a Game Changer

What makes Anthropic different? They’re not just chasing raw power; they’re deeply invested in responsible AI development. Here’s a closer look at what they’re focusing on:

  • Constitutional AI: Imagine training an AI with a set of core principles, almost like a constitution. That’s what Anthropic is doing. This helps ensure the AI’s decisions and outputs align with human values, reducing the risk of harmful or biased results (a toy sketch of this critique-and-revise idea follows this list).
  • Interpretability: Ever wonder how an AI actually makes a decision? Anthropic is working on making these complex systems more transparent. This “interpretability” is crucial for spotting potential problems and making sure AI is used responsibly.
  • Steerability: It’s not enough for AI to be smart; we need to be able to control it. Anthropic is developing ways to effectively guide AI behavior, ensuring it does what we intend and avoids unwanted outcomes.
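
To make the Constitutional AI point above a bit more concrete, here is a deliberately simplified, hypothetical Python sketch of a critique-and-revise loop guided by written principles. Every function and principle in it is a placeholder invented for illustration; Anthropic’s actual pipeline relies on model-generated critiques and reinforcement learning, not keyword checks like this one.

```python
# Toy critique-and-revise loop in the spirit of "constitutional AI".
# Every function below is a stand-in for a real model call; the keyword-based
# critique is purely illustrative and is not how Anthropic trains Claude.

CONSTITUTION = [
    "Avoid content that could facilitate physical harm.",
    "Acknowledge uncertainty instead of inventing facts.",
]

def draft_response(prompt: str) -> str:
    # Stand-in for a base model producing a first draft.
    return f"Draft answer to: {prompt} (I am 100% certain about everything here.)"

def critique(response: str, principle: str) -> str | None:
    # Stand-in for asking a model whether the draft violates the principle.
    if "certain about everything" in response and "uncertainty" in principle:
        return f"Overconfident wording conflicts with: {principle}"
    return None

def revise(response: str, feedback: str) -> str:
    # Stand-in for asking the model to rewrite its draft given the critique.
    softened = response.replace(
        "(I am 100% certain about everything here.)",
        "(Parts of this may be uncertain.)",
    )
    return f"{softened}  [revised: {feedback}]"

def constitutional_pass(prompt: str) -> str:
    response = draft_response(prompt)
    for principle in CONSTITUTION:
        feedback = critique(response, principle)
        if feedback is not None:
            response = revise(response, feedback)
    return response

if __name__ == "__main__":
    print(constitutional_pass("Summarize today's AI news."))
```

In the published constitutional AI approach, critiques like these are used to generate training data so the finished model internalizes the principles; the toy loop above only captures the general shape of the idea.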

These principles are vital in addressing the growing concerns about the potential downsides of advanced AI. By backing Anthropic, Google isn’t just getting access to impressive technology; they’re aligning themselves with a company that puts ethical AI development front and center.

The Google vs. Microsoft Showdown: AI Edition

Google’s increased investment in Anthropic can also be seen as a direct response to Microsoft’s close relationship with OpenAI. Microsoft has poured billions into OpenAI and is weaving their technology into products like Azure cloud and Bing search.

This has turned up the heat in the competition between Google and Microsoft to become the dominant force in AI. Google, a long-time leader in AI research, is now facing a serious challenge from Microsoft, who have been incredibly successful in commercializing OpenAI’s work.

By deepening its ties with Anthropic, Google is looking to counter Microsoft’s moves and reclaim its position at the top of the AI ladder. This investment not only brings advanced AI models into the Google fold but also strengthens their team and research capabilities.

The Future of AI: A Mix of Collaboration and Competition

The AI world is a complex mix of intense competition and strategic partnerships. While giants like Google and Microsoft are battling for market share, they also understand the importance of working together and sharing research.

Anthropic, despite its close relationship with Google, has also partnered with other organizations and made its research publicly available. This collaborative spirit is essential for moving the field forward and ensuring AI is developed responsibly.

This latest investment in Anthropic highlights something crucial: AI safety and ethics are no longer side issues; they’re central to the future of AI. As AI becomes more powerful and integrated into our lives, it’s essential that these systems reflect our values and are used for good.

In Conclusion

Google’s extra $1 billion investment in Anthropic is a major moment in the ongoing AI race. It demonstrates Google’s commitment to not only pushing the boundaries of AI but also doing so in a responsible way, while keeping a close eye on the competition, especially Microsoft and OpenAI.

This investment is likely to accelerate the development of even more advanced AI, with potential impacts across many industries and aspects of our lives. As the AI landscape continues to evolve, it’s vital that companies, researchers, and policymakers work together to ensure this powerful technology is developed and used in a way that benefits humanity.

A Word Puzzle Challenge Highlights Limitations In OpenAI’s AI Reasoning Capabilities

Image Source: “Mess__e to L_ke Sky__lker” by DocChewbacca is licensed under CC BY-NC-SA 2.0. https://www.flickr.com/photos/49462908@N00/3983751145

You can listen to the audio version of the article above.

Despite OpenAI CEO Sam Altman’s assertions about the company being close to achieving artificial general intelligence (AGI), a recent test of their most advanced publicly available AI has exposed a notable flaw.

As Gary Smith, a senior fellow at the Walter Bradley Center for Natural and Artificial Intelligence, explains in *Mind Matters*, OpenAI’s “o1” reasoning model struggled significantly with the *New York Times* Connections word game.

This game challenges players with 16 words, tasking them with finding connections between them to form groups of four. These connections can range from simple categories like “book subtitles” to more complex and less obvious ones, such as “words that start with fire,” making it a rather demanding exercise in lateral thinking.
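
For a concrete sense of the format, here is a small, self-contained Python sketch of how a Connections-style answer can be scored. The 16 words and the sample “LLM answer” below are invented for illustration rather than taken from the puzzle Smith used; the point is simply that a proposed group only earns credit when it matches a solution group exactly, which is why “bizarre” groupings score zero.

```python
# Hypothetical Connections-style puzzle: 16 invented words in four groups of four.
SOLUTION = [
    {"bass", "pike", "perch", "sole"},        # fish
    {"rose", "coral", "salmon", "peach"},     # shades of pink
    {"delta", "echo", "victor", "tango"},     # NATO alphabet letters
    {"scale", "fin", "gill", "school"},       # fish-related words (the trap group)
]

def score_groupings(proposed: list[set[str]]) -> int:
    """Count how many proposed groups of four exactly match a solution group."""
    return sum(1 for group in proposed if group in SOLUTION)

if __name__ == "__main__":
    # An invented answer mixing one valid connection with several "bizarre" groupings.
    llm_answer = [
        {"bass", "pike", "perch", "sole"},      # correct
        {"rose", "echo", "fin", "delta"},       # nonsensical grouping, scores zero
        {"coral", "salmon", "peach", "scale"},  # one word off, still scores zero
        {"victor", "tango", "gill", "school"},  # also wrong
    ]
    print(f"Correct groups: {score_groupings(llm_answer)} / 4")  # Correct groups: 1 / 4
```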

Smith tested o1, along with comparable large language models (LLMs) from Google, Anthropic, and Microsoft (which utilizes OpenAI’s technology), using a daily Connections puzzle.

The results were quite surprising, especially given the widespread hype surrounding AI advancements. All the models performed poorly, but o1, which has been heavily touted as a major breakthrough for OpenAI, fared particularly badly. This test indicates that even this supposedly cutting-edge system struggles with the relatively simple task of solving a word association game.

When presented with that day’s Connections challenge, o1 did manage to identify some correct groupings, to its credit. However, Smith observed that its other suggested combinations were “bizarre,” bordering on nonsensical.

Smith aptly characterized o1’s performance as offering “many puzzling groupings” alongside a “few valid connections.” This highlights a recurring weakness in current AI: while it can often appear impressive when recalling and processing information it has been trained on, it encounters significant difficulties when confronted with novel and unfamiliar problems.

Essentially, if OpenAI is genuinely on the cusp of achieving artificial general intelligence (AGI), or has even made preliminary progress towards it, as suggested by one of their employees last year, they are certainly not demonstrating it effectively. This specific test provides clear evidence that the current iteration of their technology is not yet capable of the kind of flexible reasoning that characterizes true general intelligence.

OpenAI Calls For More Investment And Regulation To Maintain US AI Leadership

Image Source: “Hand holding smartphone with OpenAI Chat GPT against flag of USA (52916339922)” by Jernej Furman from Slovenia is licensed under CC BY 2.0. https://commons.wikimedia.org/w/index.php?curid=134006171

You can listen to the audio version of the article above.

OpenAI has recently presented its vision for the future of AI development within the United States, issuing a call for strategic investment and thoughtful regulation to ensure the nation maintains a leading position in the face of growing competition from China.

In a comprehensive 15-page report titled “Economic Blueprint,” the AI company outlines what it believes are the essential components for achieving and sustaining AI dominance.

These key elements include robust computing hardware (specifically advanced chips), access to vast quantities of information (data), and reliable access to necessary resources (primarily energy).

The report strongly advocates for the establishment of national guidelines and policies designed to protect and bolster the U.S.’s competitive advantage in these critical areas.

This announcement arrives at a pivotal moment, just ahead of a new presidential administration taking office, which is widely anticipated to be more receptive and supportive of the technology sector. Prominent figures like former PayPal executive David Sacks are expected to potentially play influential roles in shaping future AI and cryptocurrency policy within the new administration. Notably, OpenAI’s CEO, Sam Altman, also made financial contributions to the incoming administration’s inauguration, aligning himself with other business leaders who are actively seeking to cultivate stronger relationships with the incoming leadership.

The report also draws attention to the significant global financial resources currently being directed towards AI projects, estimating the total investment at approximately $175 billion.

It warns that if the U.S. fails to attract a substantial portion of this capital, there is a serious risk that these funds will instead flow into Chinese initiatives, potentially strengthening China’s global influence and technological capabilities.

In a further effort to safeguard U.S. interests, OpenAI has suggested implementing restrictions on the export of advanced AI models to nations that are considered likely to misuse or exploit this technology.

Backed by its strategic partnership with Microsoft, OpenAI is planning to host a meeting in Washington D.C. later this month to delve deeper into these crucial recommendations and engage in further discussions with policymakers and industry leaders.

In a parallel move to secure further funding and support its ambitious goals, the company is currently undergoing a transition to a for-profit structure following a successful and significant fundraising round conducted last year.

OpenAI Employees Share Perspectives On The Company’s Future

Image Source: “OpenAI OpenAI on a phone” by Focal Foto is licensed under CC BY-SA 4.0. https://commons.wikimedia.org/w/index.php?curid=149757073

You can listen to the audio version of the article above.

It has been a wild ride for OpenAI in the past week or so. Both current and former employees have started to speak up about OpenAI’s future.

Chaos kicked off last week when several prominent employees, including OpenAI’s chief technology officer Mira Murati and top researchers Barret Zoph and Bob McGrew, announced that they were leaving the company.

A day later, OpenAI CEO Sam Altman confirmed the rumors that the company was indeed considering ditching its non-profit status and becoming a for-profit company instead.

This has sparked a lot of discussion and debate about the direction in which OpenAI is heading and what it means for the future of the company and AI.

OpenAI has kept largely silent regarding this whole restructuring situation. They have not made any official announcements, but CEO Sam Altman did mention that they are exploring this change as a way to reach their next stage of development.

This shift towards becoming a for-profit company seems to be connected to the fact that they want to raise billions in new investments.

Naturally, people are curious about what’s really going on behind the scenes at OpenAI, especially with the recent resignations of several key executives and researchers.

Some are speculating that there might be internal disagreements about the company’s direction and whether it is prioritizing profit over its original non-profit mission.

It will be interesting to see how this all unfolds and what it means for the future of OpenAI and the development of AI in general.

According to some of OpenAI’s departing employees, there is internal concern that the shift to a for-profit company confirms what they already suspected: Altman is prioritizing profit over safety.

When OpenAI safety leader Jan Leike announced his resignation in May, he said on X he had thought it would be “the best place in the world to do this research.” By the time he left, however, he said he had reached a “breaking point” with OpenAI’s leadership over the company’s core priorities.

Gretchen Krueger, a former policy researcher at OpenAI, said the company’s nonprofit governance structure and cap on profits were part of the reason she joined in 2019 — the year that OpenAI added a for-profit arm. “This feels like a step in the wrong direction, when what we need is multiple steps in the right direction,” she said on X.

She said OpenAI’s bid to transition into a public benefit corporation — a for-profit company intended to generate social good — isn’t enough. As one of the biggest developers of artificial general intelligence, OpenAI needs “stronger mission locks,” she wrote.

Noam Brown, a researcher at OpenAI, firmly disagrees that the company has lost its focus on research. “Those of us at @OpenAI working on o1 find it strange to hear outsiders claim that OpenAI has deprioritized research. I promise you all, it’s the opposite,” he wrote on X on Friday.

Mark Chen, the senior vice president of research at OpenAI, also reaffirmed his commitment to OpenAI. “I truly believe that OpenAI is the best place to work on AI, and I’ve been through enough ups and downs to know it’s never wise to bet against us,” he wrote on X.

OpenAI Whistleblower Disgusted That His Job Was To Collect Copyrighted Data For Training Its Models

Image Source: Photo by Andrew Neel: https://www.pexels.com/photo/computer-monitor-with-openai-website-loading-screen-15863000/

You can listen to the audio version of the article above.

A researcher who used to work at OpenAI is claiming that the company broke the law by using copyrighted materials to train its AI models. The whistleblower also says that OpenAI’s whole way of doing business could totally shake up the internet as we know it.

Suchir Balaji, 25, worked at OpenAI for four years. But he got so freaked out by what they were doing that he quit!

He is basically saying that now that ChatGPT is making big bucks, they can’t just grab stuff from the internet without permission. It’s not “fair use” anymore, he says.

Of course, OpenAI is fighting back, saying they’re totally in the clear. Things are getting messy because even the New York Times is suing them over this whole copyright thing!

“If you believe what I believe,” Balaji told the NYT, “You have to just leave the company.”

Balaji’s warnings, which he outlined in a post on his personal website, add to the ever-growing controversy around the AI industry’s collection and use of copyrighted material to train AI models, a practice largely conducted without comprehensive government regulation and outside of the public eye.

“Given that AI is evolving so quickly,” intellectual property lawyer Bradley Hulbert told the NYT, “it is time for Congress to step in.”

So, picture this: It’s 2020, and Balaji, fresh out of college maybe, lands this cool job at OpenAI. He’s basically part of this team whose job it is to scour the web and gather all kinds of stuff to feed these AI models. Back then, OpenAI was still playing the whole “we’re just researchers” card, so nobody was really paying attention to where they were getting all this data from. Copyright? Meh, not a big deal… yet!

“With a research project, you can, generally speaking, train on any data,” Balaji told the NYT. “That was the mindset at the time.”

But then, boom! ChatGPT explodes onto the scene in 2022, and everything changes. Suddenly, this thing isn’t just some nerdy research project anymore.

It’s making real money, generating content, and even ripping off people’s work! Balaji starts to realize that this whole thing is kinda shady. He’s seeing how ChatGPT is basically stealing ideas and putting people’s jobs at risk. It’s like, ‘Wait a minute, this isn’t what I signed up for!’

“This is not a sustainable model,” Balaji told the NYT, “for the internet ecosystem as a whole.”

Now, OpenAI is singing a different tune. They’ve totally ditched their whole “we’re just a non-profit” act and are all about the Benjamins. They are saying, “Hey, we’re just using stuff that’s already out there, and it’s totally legal!” They even try to make it sound patriotic by saying that it’s “critical for US competitiveness.”

OpenAI Exposes Musk’s For-Profit Push In Fiery Rebuttal; The Drama Continues!

Source of image: Photo by Andrew Neel: https://www.pexels.com/photo/openai-text-on-tv-screen-15863044/

You can listen to the audio version of the article above.

The ongoing dispute between OpenAI and Elon Musk has taken a new turn. OpenAI has released a series of emails on its website suggesting that Musk himself had previously advocated for a for-profit structure for the startup.

This revelation is significant given how critical Musk has been of OpenAI’s subsequent transition from a non-profit to a for-profit entity, a transition that also led to a lawsuit involving Microsoft.

In a Saturday blog post, OpenAI asserted that Musk not only desired a for-profit model but also proposed a specific organizational structure. Supporting this claim, OpenAI shared documentation indicating that Musk instructed his wealth manager, Jared Birchall, to register “Open Artificial Intelligence Technologies, Inc.” as the for-profit arm of OpenAI.

OpenAI isn’t holding back in their latest response to Elon Musk’s legal actions. In a recent blog post, they pointed out that this is Musk’s fourth try in under a year to change his story about what happened. They basically said, “His own words and actions tell the real story.”

They went on to say that back in 2017, Musk didn’t just want OpenAI to be for-profit, he actually set up a for-profit structure himself. But when he couldn’t get majority ownership and total control, he walked out telling them they were doomed to fail.

Now they argue that since OpenAI is a leading AI lab and Musk is running a rival AI company, he is trying to use the courts to stop them from achieving their goals.

In a separate legal filing, OpenAI also pushed back against Musk’s attempt to block their move to a for-profit model. They argued that what Musk is asking for would seriously hurt OpenAI’s business, decision-making and mission to create safe and beneficial AI, all while benefiting Musk and his own company.

OpenAI also claimed that Musk wanted a majority stake in the for-profit arm of the company. The AI startup claimed that Musk said he did not care about the money but instead wanted to accumulate $80 billion in wealth in order to build a city on Mars.


Research Shows AI Systems Are Highly Susceptible To Data Poisoning With Minimal Misinformation

Photo by Lukas: https://www.pexels.com/photo/pie-graph-illustration-669621/

You can listen to the audio version of this article in the above video.

It is widely known that large language models (LLMs), the technology behind popular chatbots like ChatGPT, can be surprisingly unreliable. Even the most advanced LLMs have a tendency to misrepresent facts, often with unsettling confidence.

This unreliability becomes particularly dangerous when dealing with medical information, as people’s health could be at stake.

Researchers at New York University have discovered a disturbing vulnerability: adding even a tiny amount of deliberately false information (a mere 0.001%) to an LLM’s training data can cause the entire system to spread inaccuracies.

Their research, published in Nature Medicine and reported by Ars Technica, also revealed that these corrupted LLMs perform just as well on standard tests designed for medical LLMs as those trained on accurate data. This alarming finding suggests that current testing methods may not be sufficient to detect these serious risks.

The researchers emphasize the urgent need for improved data tracking and greater transparency in LLM development, especially within the healthcare sector, where misinformation can have life-threatening consequences for patients.

In one experiment, the researchers introduced AI-generated medical misinformation into “The Pile,” a commonly used LLM training dataset that includes reputable medical sources like PubMed. They were able to create 150,000 fabricated medical articles in just 24 hours, demonstrating how easily and cheaply these systems can be compromised. The researchers point out that malicious actors can effectively “poison” an LLM simply by disseminating false information online.

This research highlights significant dangers associated with using AI tools, particularly in healthcare. This is not a hypothetical problem; last year, the New York Times reported that MyChart, an AI platform used by doctors to respond to patient inquiries, frequently generated inaccurate information about patients’ medical conditions.

The unreliability of LLMs, especially in the medical field, is a serious and pressing concern. The researchers strongly advise AI developers and healthcare providers to acknowledge this vulnerability when developing medical LLMs. They caution against using these models for diagnosis or treatment until stronger safeguards are implemented and more thorough security research is conducted to ensure their reliability in critical healthcare settings.

The study found that by replacing just one million out of 100 billion training units (0.001%) with vaccine misinformation, they observed a 4.8% increase in harmful content generated by the LLM. This was achieved by adding approximately 2,000 fake articles (around 1,500 pages), which cost a mere $5 to generate.
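
As a quick back-of-the-envelope check on those figures, the hypothetical Python sketch below reproduces the poisoning rate and illustrates, in a toy way, what swapping a small fraction of a corpus for fabricated documents looks like. Only the one-million-out-of-100-billion ratio comes from the article; the corpus, the documents, and the mixing function are invented for illustration and are not the researchers’ actual pipeline.

```python
import random

# The two constants restate the study's figures as reported above; everything else
# in this sketch is an invented illustration, not the researchers' method.
TOTAL_TRAINING_UNITS = 100_000_000_000   # 100 billion training units
POISONED_UNITS = 1_000_000               # 1 million units swapped for misinformation

poison_fraction = POISONED_UNITS / TOTAL_TRAINING_UNITS
print(f"Poisoned fraction of the corpus: {poison_fraction:.6%}")  # 0.001000%

def poison_corpus(clean_docs, fake_docs, fraction, seed=0):
    """Toy example: replace roughly `fraction` of the clean documents with fabricated ones."""
    rng = random.Random(seed)
    corpus = list(clean_docs)
    n_poison = max(1, int(len(corpus) * fraction))  # at least one, so the tiny demo below shows the effect
    for idx in rng.sample(range(len(corpus)), n_poison):
        corpus[idx] = rng.choice(fake_docs)
    return corpus

# Tiny demonstration on a 10,000-document toy corpus.
clean = [f"reputable medical article {i}" for i in range(10_000)]
fake = ["fabricated claim about vaccines"]
poisoned = poison_corpus(clean, fake, poison_fraction)
inserted = sum(doc == fake[0] for doc in poisoned)
print(f"{inserted} fabricated document(s) inserted into a {len(clean):,}-document toy corpus")
```

The same arithmetic is what makes the attack cheap: at that ratio, a handful of fabricated articles disappears into a web-scale corpus, which is why the researchers emphasize better data tracking and transparency.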

Crucially, unlike traditional hacking attempts that target data theft or direct control of the AI, this “data poisoning” method does not require direct access to the model’s internal workings, making it a particularly insidious threat.