
ChatGPT’s Advanced AI That Costs $200/mo Is Now Free For Windows Users

Image Source: “Microsoft Windows 3.1 Jpn box” by Darklanlan is marked with CC0 1.0. https://commons.wikimedia.org/w/index.php?curid=95530546

You can listen to the audio version of the article above.

Microsoft is making a bold move to make powerful AI more accessible. They’re giving users of their Copilot service what seems like unlimited access to OpenAI’s top-tier reasoning model, o1, through a new feature called “Think Deeper.”

The key here is that it’s essentially free as part of Copilot. OpenAI itself charges a hefty $200 per month for unlimited access to o1 with ChatGPT Pro, or offers limited access through the $20 per month ChatGPT Plus plan.

By including this powerful AI in Copilot, Microsoft is shaking up the AI landscape. This could be a game-changer for users who want to leverage advanced AI capabilities without breaking the bank.

On Wednesday, Microsoft AI CEO Mustafa Suleyman announced that access to the o1 model would be available to Copilot users “everywhere at no cost.” The model is reached through Copilot’s “Think Deeper” function, which takes a few seconds to reason over a question before producing a response. Because the Copilot app on Windows is now essentially a PWA, or webpage, you can reach the feature either through the Copilot app on Windows or via copilot.microsoft.com. You’ll need to sign in with a Microsoft account.

(The “Think Deeper” control in Copilot is essentially a toggle switch. Just make sure it’s “on,” or highlighted, before you enter your query.)

It seems like Microsoft is giving Copilot a serious upgrade with “Think Deeper”! It’s like Copilot has been hitting the books and is ready to tackle more complex tasks. Instead of just giving short, quick answers, Think Deeper is all about diving deep and giving you more thoughtful and detailed responses.

Don’t expect it to be like Google, though. It won’t give you up-to-the-minute news or search results. Think Deeper is more like an expert on things that don’t really change much, like explaining scientific concepts or analyzing historical events.

For example, it could help you understand how hurricanes form by explaining the water cycle and how evaporation plays a key role. Or, it could give you a detailed analysis of a historical event or a current situation (though keep in mind its knowledge is only up-to-date to October 2023).

And get this, Think Deeper can even write code for you and explain how it works! Imagine asking it to create a simple program that draws a maze based on your name, and it not only writes the code but also walks you through the process. Pretty cool, huh?
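To make that concrete, here is a rough sketch of the kind of program such a prompt might produce. This is my own illustrative Python, not Copilot’s actual output: it carves a small maze with a depth-first walk seeded by whatever name you pass in, then prints it as ASCII art.

```python
# Illustrative sketch only: the kind of "maze from your name" program an AI
# assistant might write. Not taken from Copilot's actual output.
import random


def generate_maze(name: str, width: int = 10, height: int = 10) -> str:
    """Carve a maze with a depth-first (recursive backtracker) walk,
    seeded deterministically from the given name."""
    rng = random.Random(name)  # same name -> same maze
    # Each cell starts with all four walls standing.
    walls = [[{"N": True, "S": True, "E": True, "W": True}
              for _ in range(width)] for _ in range(height)]
    visited = [[False] * width for _ in range(height)]

    def carve(x: int, y: int) -> None:
        visited[y][x] = True
        directions = [("N", 0, -1, "S"), ("S", 0, 1, "N"),
                      ("E", 1, 0, "W"), ("W", -1, 0, "E")]
        rng.shuffle(directions)
        for wall, dx, dy, opposite in directions:
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and not visited[ny][nx]:
                walls[y][x][wall] = False        # knock down our wall
                walls[ny][nx][opposite] = False  # and the neighbour's matching one
                carve(nx, ny)

    carve(0, 0)

    # Render the grid as ASCII art.
    lines = ["+" + "--+" * width]
    for y in range(height):
        row, floor = "|", "+"
        for x in range(width):
            row += "  " + ("|" if walls[y][x]["E"] else " ")
            floor += ("--" if walls[y][x]["S"] else "  ") + "+"
        lines.append(row)
        lines.append(floor)
    return "\n".join(lines)


if __name__ == "__main__":
    print(generate_maze("Ada"))  # swap in your own name for a different maze
```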

It sounds like Microsoft wants Think Deeper to be your go-to tool for in-depth research and creative problem-solving. It’s like having a super smart friend who can help you explore complex topics and tackle challenging projects.

So, it looks like Microsoft is being pretty generous with Think Deeper! They haven’t said anything about charging extra for it, even though they could probably get away with it considering how powerful it is. This is great news for users who want to explore its capabilities without worrying about hidden costs or subscription fees.

Of course, the AI world moves fast, and there’s already a newer, even more powerful model called o3. This one is supposedly amazing at tackling tough coding challenges and solving complex problems. But, as you might expect, it probably won’t be free.

This kind of highlights the ongoing competition in the AI space. OpenAI keeps pushing the boundaries with new models, and Microsoft is finding ways to make those advancements more accessible to users. It’ll be interesting to see how this plays out and what new AI innovations we’ll see in the future!

This move by Microsoft could be a real game-changer in the AI landscape. By offering free access to such a powerful AI model, they’re putting pressure on competitors like Google and OpenAI to rethink their pricing strategies.

It also raises questions about the future of AI accessibility and how these advancements will be made available to the wider public.

Will we see a trend towards more affordable or even free access to advanced AI tools? Or will companies continue to charge premium prices for the latest and greatest AI models?

Moreover, the integration of Think Deeper into Copilot could significantly impact how people use AI in their daily lives.

Imagine students using it to get help with complex research papers, writers using it to generate creative content, or programmers using it to debug code and learn new programming concepts.

The possibilities are endless, and it will be fascinating to see how users leverage this powerful tool to enhance their productivity and creativity.

As AI becomes more sophisticated and accessible, it’s likely to become an even more integral part of how we learn, work and interact with the world around us.

A Word Puzzle Challenge Highlights Limitations In OpenAI’s AI Reasoning Capabilities

Image Source: “Mess__e to L_ke Sky__lker” by DocChewbacca is licensed under CC BY-NC-SA 2.0. https://www.flickr.com/photos/49462908@N00/3983751145

You can listen to the audio version of the article above.

Despite OpenAI CEO Sam Altman’s assertions about the company being close to achieving artificial general intelligence (AGI), a recent test of their most advanced publicly available AI has exposed a notable flaw.

As Gary Smith, a senior fellow at the Walter Bradley Center for Natural and Artificial Intelligence, explains in *Mind Matters*, OpenAI’s “o1” reasoning model struggled significantly with the *New York Times* Connections word game.

This game challenges players with 16 words, tasking them with finding connections between them to form groups of four. These connections can range from simple categories like “book subtitles” to more complex and less obvious ones, such as “words that start with fire,” making it a rather demanding exercise in lateral thinking.
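To make the format concrete, here is a hypothetical, scaled-down version of a Connections-style puzzle in Python. The words and categories are invented for illustration; the real puzzle presents 16 words that split into four groups of four.

```python
# Hypothetical miniature of a Connections-style puzzle: 8 words, 2 groups of 4.
# The categories, words, and guesses are invented for illustration only.
ANSWER_KEY = {
    "words that can precede 'fire'": {"camp", "bon", "wild", "cease"},
    "units of time": {"second", "minute", "hour", "day"},
}


def check_guess(guess: set) -> str:
    """Return the category name if the four guessed words form a valid group."""
    for category, group in ANSWER_KEY.items():
        if guess == group:
            return f"Correct! Category: {category}"
    return "Not a valid group."


print(check_guess({"camp", "bon", "wild", "cease"}))    # a valid grouping
print(check_guess({"camp", "second", "wild", "hour"}))  # a plausible-looking miss
```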

Smith tested o1, along with comparable large language models (LLMs) from Google, Anthropic, and Microsoft (which utilizes OpenAI’s technology), using a daily Connections puzzle.

The results were quite surprising, especially given the widespread hype surrounding AI advancements. All the models performed poorly, but o1, which has been heavily touted as a major breakthrough for OpenAI, fared particularly badly. This test indicates that even this supposedly cutting-edge system struggles with the relatively simple task of solving a word association game.

When presented with that day’s Connections challenge, o1 did manage to identify some correct groupings, to its credit. However, Smith observed that its other suggested combinations were “bizarre,” bordering on nonsensical.

Smith aptly characterized o1’s performance as offering “many puzzling groupings” alongside a “few valid connections.” This highlights a recurring weakness in current AI: while it can often appear impressive when recalling and processing information it has been trained on, it encounters significant difficulties when confronted with novel and unfamiliar problems.

Essentially, if OpenAI is genuinely on the cusp of achieving artificial general intelligence (AGI), or has even made preliminary progress towards it, as suggested by one of their employees last year, they are certainly not demonstrating it effectively. This specific test provides clear evidence that the current iteration of their technology is not yet capable of the kind of flexible reasoning that characterizes true general intelligence.

OpenAI Calls For More Investment And Regulation To Maintain US AI Leadership

Image Source: “Hand holding smartphone with OpenAI Chat GPT against flag of USA (52916339922)” by Jernej Furman from Slovenia is licensed under CC BY 2.0. https://commons.wikimedia.org/w/index.php?curid=134006171

You can listen to the audio version of the article above.

OpenAI has recently presented its vision for the future of AI development within the United States, issuing a call for strategic investment and thoughtful regulation to ensure the nation maintains a leading position in the face of growing competition from China.

In a comprehensive 15-page report titled “Economic Blueprint,” the AI company outlines what it believes are the essential components for achieving and sustaining AI dominance.

These key elements include robust computing hardware (specifically advanced chips), access to vast quantities of information (data), and reliable access to necessary resources (primarily energy).

The report strongly advocates for the establishment of national guidelines and policies designed to protect and bolster the U.S.’s competitive advantage in these critical areas.

This announcement arrives at a pivotal moment, just ahead of a new presidential administration taking office, which is widely anticipated to be more receptive and supportive of the technology sector. Prominent figures like former PayPal executive David Sacks are expected to potentially play influential roles in shaping future AI and cryptocurrency policy within the new administration. Notably, OpenAI’s CEO, Sam Altman, also made financial contributions to the incoming administration’s inauguration, aligning himself with other business leaders who are actively seeking to cultivate stronger relationships with the incoming leadership.

The report also draws attention to the significant global financial resources currently being directed towards AI projects, estimating the total investment at approximately $175 billion.

It warns that if the U.S. fails to attract a substantial portion of this capital, there is a serious risk that these funds will instead flow into Chinese initiatives, potentially strengthening China’s global influence and technological capabilities.

In a further effort to safeguard U.S. interests, OpenAI has suggested implementing restrictions on the export of advanced AI models to nations that are considered likely to misuse or exploit this technology.

Backed by its strategic partnership with Microsoft, OpenAI is planning to host a meeting in Washington D.C. later this month to delve deeper into these crucial recommendations and engage in further discussions with policymakers and industry leaders.

In a parallel move to secure further funding and support its ambitious goals, the company is currently undergoing a transition to a for-profit structure following a successful and significant fundraising round conducted last year.

Unexpectedly High Demand for ChatGPT Pro Creates Financial Losses for OpenAI, According to Sam Altman

Image Source: “Smartphone with ChatGPT on keyboard (52917311050)” by Jernej Furman from Slovenia is licensed under CC BY 2.0. https://commons.wikimedia.org/w/index.php?curid=134006150

You can listen to the audio version of the article above.

Sam Altman, the CEO of OpenAI, recently shared a surprising bit of news on X (formerly Twitter) about the company’s finances. He admitted that, despite how popular their ChatGPT Pro subscriptions are, they’re actually losing money on them.

He basically tweeted something like, “Here’s a crazy fact: we’re actually *losing* money on OpenAI Pro subscriptions! Demand is way higher than we expected.”

ChatGPT Pro, which gives users access to OpenAI’s most capable models, including o1, has become a global hit, including in India, where it costs around INR 17,000 (roughly $200) per month. This makes it a pretty attractive option for professionals, businesses, and students looking to boost their productivity and creativity. Pro users get perks like faster responses, guaranteed access even during peak times, and the most advanced capabilities OpenAI offers.

But even with all those sign-ups, Altman’s comments point to a big issue: people are using it *way* more than OpenAI anticipated. The Pro subscription was meant to help cover the huge costs of running these massive AI models, but the sheer volume of usage is throwing a wrench in their financial plans. Running powerful AI models like GPT-4 is incredibly expensive because they require massive computing power, which translates to huge bills for cloud computing and server maintenance.

It’s a tricky balancing act for tech companies: they need to offer powerful, accessible tools while also making sure they can stay afloat financially. OpenAI aimed to make AI accessible with their Pro subscription pricing, but they’re now facing unexpected challenges that are forcing them to rethink their strategy.

India, in particular, has become a major market for ChatGPT. Professionals in fields like education, IT, and content creation are signing up for Pro to streamline their work and leverage the power of AI. The fact that it’s relatively more affordable in India compared to other regions has definitely contributed to its popularity there.

Altman’s admission has everyone wondering what OpenAI’s next move will be. This curiosity is even stronger given his recent prediction that AI could replace humans in many jobs by 2025.

OpenAI Employees Share Perspectives On The Company’s Future

Image Source: “OpenAI OpenAI on a phone” by Focal Foto is licensed under CC BY-SA 4.0. https://commons.wikimedia.org/w/index.php?curid=149757073

You can listen to the audio version of the article above.

It has been a wild ride for OpenAI in the past week or so. Both current and former employees have started to speak up about the company’s future.

Chaos kicked off last week when several prominent employees, including OpenAI’s chief technology officer Mira Murati and top researchers Barret Zoph and Bob McGrew, announced that they were leaving the company.

A day later, OpenAI CEO Sam Altman confirmed the rumors that the company was indeed considering ditching its non-profit status and becoming a for-profit company instead.

This has sparked a lot of discussion and debate about the direction in which OpenAI is heading and what it means for the future of the company and AI.

OpenAI has kept largely quiet about the restructuring. The company has not made any official announcement, but CEO Sam Altman did mention that it is exploring the change as a way to reach its next stage of development.

This shift towards becoming a for-profit company seems to be connected to the company’s push to raise billions in new investment.

Naturally, people are curious about what’s really going on behind the scenes at OpenAI, especially with the recent resignations of several key executives and researchers.

Some are speculating that there might be internal disagreements about the company’s direction, with concerns that it is prioritizing profit over its original non-profit mission.

It will be interesting to see how this all unfolds and what it means for the future of OpenAI and the development of AI in general.

According to some of OpenAI’s departing employees, there is internal concern that the shift to a for-profit company confirms what they already suspected: Altman is prioritizing profit over safety.

When OpenAI safety leader Jan Leike announced his resignation in May, he said on X he had thought it would be “the best place in the world to do this research.” By the time he left, however, he said he had reached a “breaking point” with OpenAI’s leadership over the company’s core priorities.

Gretchen Krueger, a former policy researcher at OpenAI, said the company’s nonprofit governance structure and cap on profits were part of the reason she joined in 2019 — the year that OpenAI added a for-profit arm. “This feels like a step in the wrong direction, when what we need is multiple steps in the right direction,” she said on X.

She said OpenAI’s bid to transition into a public benefit corporation — a for-profit company intended to generate social good — isn’t enough. As one of the biggest developers of artificial general intelligence, OpenAI needs “stronger mission locks,” she wrote.

Noam Brown, a researcher at OpenAI, firmly disagrees that the company has lost its focus on research. “Those of us at @OpenAI working on o1 find it strange to hear outsiders claim that OpenAI has deprioritized research. I promise you all, it’s the opposite,” he wrote on X on Friday.

Mark Chen, the senior vice president of research at OpenAI, also reaffirmed his commitment to OpenAI. “I truly believe that OpenAI is the best place to work on AI, and I’ve been through enough ups and downs to know it’s never wise to bet against us,” he wrote on X.

OpenAI’s New Push: AI Voices For A Wider Range Of Applications

Image Source: “OpenAI logo with magnifying glass (52916339167)” by Jernej Furman from Slovenia is licensed under CC BY 2.0. https://commons.wikimedia.org/w/index.php?curid=134006159

You can listen to the audio version of the article above.

The ChatGPT Creator Will Let Other Companies Build On Its Human-Mimicking Voice Technology.

OpenAI, the company behind ChatGPT, is letting any app developer use its tech to make their apps talk. And by talk, I do not mean reciting canned lines, but actually holding a real conversation!

This could be huge because it means we will probably be chatting with all sorts of AI programs before we know it. This sounds super cool but to be fair, a little freaky as well.

As we are all aware, OpenAI’s “advanced voice mode” has been available to paying subscribers since July. Advanced voice mode offers six AI voices with the ability to sound casual and expressive.

Now this same technology will be offered to thousands of companies, which will of course pay to use OpenAI’s technology in their own products.

The obvious reason: opening up its AI inventions to outsiders will help grow OpenAI’s revenue from the usage fees it charges each time an app taps its technology.

A regular influx of cash is crucial for OpenAI, which is seeking billions of dollars in new funding and considering a restructuring that would remove its business from the control of its existing nonprofit board.

OpenAI announced at an event in San Francisco that it was opening access to its voice technology. At a news briefing ahead of the event, OpenAI executives showed how an app built on its voice technology could make a phone call to a business and place an order for chocolate strawberries.

The business was not real, though, and the person who took down the order and asked questions, which the AI voice nimbly answered, was an OpenAI executive role-playing.

“We want to make it possible to interact with AI in all of the ways you interact with a human being,” OpenAI chief product officer Kevin Weil said at the press briefing.
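For developers wondering what “tapping OpenAI’s technology” might look like in practice, here is a minimal sketch using the text-to-speech endpoint in OpenAI’s Python SDK. It is a simpler, one-way cousin of the conversational, real-time voice capability described above, and the model and voice names shown are assumptions chosen for illustration rather than details from the announcement.

```python
# Minimal sketch: synthesize a spoken reply with OpenAI's text-to-speech endpoint.
# The model ("tts-1") and voice ("alloy") are illustrative choices; the real-time
# conversational product described above is exposed through a separate API.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Your order of chocolate strawberries will be ready tomorrow.",
)

# Save the returned audio bytes so the app can play them back.
with open("reply.mp3", "wb") as f:
    f.write(speech.content)
```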

OpenAI Whistleblower Disgusted That His Job Was To Collect Copyrighted Data For Training Its Models

Image Source: Photo by Andrew Neel: https://www.pexels.com/photo/computer-monitor-with-openai-website-loading-screen-15863000/

You can listen to the audio version of the article above.

A researcher who used to work at OpenAI is claiming that the company broke the law by using copyrighted materials to train its AI models. The whistleblower also says that OpenAI’s whole way of doing business could totally shake up the internet as we know it.

Suchir Balaji, 25, worked at OpenAI for four years. But he got so freaked out by what the company was doing that he quit!

He is basically saying that now that ChatGPT is making big bucks, they can’t just grab stuff from the internet without permission. It’s not “fair use” anymore, he says.

Of course, OpenAI is fighting back, saying they’re totally in the clear. Things are getting messy because even the New York Times is suing them over this whole copyright thing!

“If you believe what I believe,” Balaji told the NYT, “you have to just leave the company.”

Balaji’s warnings, which he outlined in a post on his personal website, add to the ever-growing controversy around the AI industry’s collection and use of copyrighted material to train AI models, a practice largely conducted without comprehensive government regulation and outside of the public eye.

“Given that AI is evolving so quickly,” intellectual property lawyer Bradley Hulbert told the NYT, “it is time for Congress to step in.”

So, picture this: It’s 2020, and Balaji, fresh out of college maybe, lands this cool job at OpenAI. He’s basically part of this team whose job it is to scour the web and gather all kinds of stuff to feed these AI models. Back then, OpenAI was still playing the whole “we’re just researchers” card, so nobody was really paying attention to where they were getting all this data from. Copyright? Meh, not a big deal… yet!

“With a research project, you can, generally speaking, train on any data,” Balaji told the NYT. “That was the mindset at the time.”

But then, boom! ChatGPT explodes onto the scene in 2022, and everything changes. Suddenly, this thing isn’t just some nerdy research project anymore.

It’s making real money, generating content, and even ripping off people’s work! Balaji starts to realize that this whole thing is kinda shady. He’s seeing how ChatGPT is basically stealing ideas and putting people’s jobs at risk. It’s like, ‘Wait a minute, this isn’t what I signed up for!’

“This is not a sustainable model,” Balaji told the NYT, “for the internet ecosystem as a whole.”

Now, OpenAI is singing a different tune. They’ve totally ditched their whole “we’re just a non-profit” act and are all about the Benjamins. They are saying, “Hey, we’re just using stuff that’s already out there, and it’s totally legal!” They even try to make it sound patriotic by saying that it’s “critical for US competitiveness.”

OpenAI Exposes Musk’s For-Profit Push In Fiery Rebuttal; The Drama Continues!

Source of image: Photo by Andrew Neel: https://www.pexels.com/photo/openai-text-on-tv-screen-15863044/

You can listen to the audio version of the article above.

The ongoing dispute between OpenAI and Elon Musk has taken a new turn. OpenAI has released a series of emails on its website suggesting that Musk himself had previously advocated for a for-profit structure for the startup.

This revelation is huge given how critical Musk has been of OpenAI’s subsequent transition from a non-profit to a for-profit entity, which also led to a lawsuit involving Microsoft.

In a Saturday blog post, OpenAI asserted that Musk not only desired a for-profit model but also proposed a specific organizational structure. Supporting this claim, OpenAI shared documentation indicating that Musk instructed his wealth manager, Jared Birchall, to register “Open Artificial Intelligence Technologies, Inc.” as the for-profit arm of OpenAI.

OpenAI isn’t holding back in their latest response to Elon Musk’s legal actions. In a recent blog post, they pointed out that this is Musk’s fourth try in under a year to change his story about what happened. They basically said, “His own words and actions tell the real story.”

They went on to say that back in 2017, Musk didn’t just want OpenAI to be for-profit, he actually set up a for-profit structure himself. But when he couldn’t get majority ownership and total control, he walked out telling them they were doomed to fail.

Now they argue that since OpenAI is a leading AI lab and Musk is running a rival AI company, he is trying to use the courts to stop them from achieving their goals.

In a separate legal filing, OpenAI also pushed back against Musk’s attempt to block their move to a for-profit model. They argued that what Musk is asking for would seriously hurt OpenAI’s business, decision-making and mission to create safe and beneficial AI, all while benefiting Musk and his own company.

OpenAI also claimed that Musk wanted a majority stake in the for-profit arm of the company. The AI startup claimed that Musk said he did not care about the money but instead wanted to accumulate $80 billion in wealth in order to build a city on Mars.


Research Shows AI Systems Are Highly Susceptible To Data Poisoning With Minimal Misinformation

Photo by Lukas: https://www.pexels.com/photo/pie-graph-illustration-669621/

You can listen to the audio version of this article in the above video.

It is widely known that large language models (LLMs), the technology behind popular chatbots like ChatGPT, can be surprisingly unreliable. Even the most advanced LLMs have a tendency to misrepresent facts, often with unsettling confidence.

This unreliability becomes particularly dangerous when dealing with medical information, as people’s health could be at stake.

Researchers at New York University have discovered a disturbing vulnerability: adding even a tiny amount of deliberately false information (a mere 0.001%) to an LLM’s training data can cause the entire system to spread inaccuracies.

Their research, published in Nature Medicine and reported by Ars Technica, also revealed that these corrupted LLMs perform just as well on standard tests designed for medical LLMs as those trained on accurate data. This alarming finding suggests that current testing methods may not be sufficient to detect these serious risks.

The researchers emphasize the urgent need for improved data tracking and greater transparency in LLM development, especially within the healthcare sector, where misinformation can have life-threatening consequences for patients.

In one experiment, the researchers introduced AI-generated medical misinformation into “The Pile,” a commonly used LLM training dataset that includes reputable medical sources like PubMed. They were able to create 150,000 fabricated medical articles in just 24 hours, demonstrating how easily and cheaply these systems can be compromised. The researchers point out that malicious actors can effectively “poison” an LLM simply by disseminating false information online.

This research highlights significant dangers associated with using AI tools, particularly in healthcare. This is not a hypothetical problem; last year, the New York Times reported that MyChart, an AI platform used by doctors to respond to patient inquiries, frequently generated inaccurate information about patients’ medical conditions.

The unreliability of LLMs, especially in the medical field, is a serious and pressing concern. The researchers strongly advise AI developers and healthcare providers to acknowledge this vulnerability when developing medical LLMs. They caution against using these models for diagnosis or treatment until stronger safeguards are implemented and more thorough security research is conducted to ensure their reliability in critical healthcare settings.

The researchers found that by replacing just one million out of 100 billion training units (0.001%) with vaccine misinformation, the poisoned LLM generated 4.8% more harmful content. This was achieved by adding approximately 2,000 fake articles (around 1,500 pages), which cost a mere $5 to generate.
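To put those figures in perspective, here is a short back-of-the-envelope calculation in Python using only the numbers reported from the study.

```python
# Back-of-the-envelope arithmetic using the figures reported from the study.
total_training_units = 100_000_000_000  # 100 billion training units
poisoned_units = 1_000_000              # 1 million units replaced with misinformation

poisoned_fraction = poisoned_units / total_training_units
print(f"Poisoned fraction of the corpus: {poisoned_fraction:.6%}")  # 0.001000%

# The poison was delivered as roughly 2,000 fabricated articles (~1,500 pages)
# generated for about $5, i.e. a fraction of a cent per article.
fake_articles = 2_000
generation_cost_usd = 5
print(f"Cost per fabricated article: ${generation_cost_usd / fake_articles:.4f}")

# Reported effect: a 4.8% increase in harmful content from the poisoned model.
```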

Crucially, unlike traditional hacking attempts that target data theft or direct control of the AI, this “data poisoning” method does not require direct access to the model’s internal workings, making it a particularly insidious threat.