OpenAI's Bold Moves and Regulatory Hurdles Reflect a Changing AI Industry
OpenAI is rapidly broadening its reach into sectors beyond consumer AI, most recently with the introduction of Project Mercury, a confidential project focused on automating financial modeling tasks typically handled by junior investment bankers. The initiative has recruited more than 100 former professionals from firms such as JPMorgan Chase, Morgan Stanley, and Goldman Sachs, and aims to develop AI that can replicate intricate financial models for deals like IPOs and leveraged buyouts. Contractors are paid $150 an hour to evaluate these models and offer feedback, with the intention of automating repetitive work usually assigned to entry-level analysts, according to a Storyboard18 report. This effort highlights OpenAI's larger ambition to monetize its technology in areas such as finance, where automation could significantly alter established processes.
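To give a concrete sense of the modeling work in question, below is a minimal Python sketch of the return math behind a simplified leveraged buyout, one of the deal types mentioned above. The function and every input figure are hypothetical illustrations, not details of OpenAI's actual evaluation tasks.

```python
# A toy leveraged-buyout (LBO) model: the kind of return math junior
# analysts build out in spreadsheets. All figures are hypothetical.

def lbo_returns(purchase_price: float, debt_pct: float, ebitda: float,
                ebitda_growth: float, exit_multiple: float,
                annual_debt_paydown: float, years: int = 5) -> tuple[float, float]:
    """Return (MOIC, IRR) for a simplified LBO with straight-line debt paydown."""
    equity_in = purchase_price * (1 - debt_pct)      # sponsor's equity check
    debt = purchase_price * debt_pct                 # leverage at entry

    for _ in range(years):
        ebitda *= 1 + ebitda_growth                  # grow operating earnings
        debt = max(0.0, debt - annual_debt_paydown)  # amortize the debt

    exit_value = ebitda * exit_multiple              # enterprise value at sale
    equity_out = exit_value - debt                   # what equity holders keep
    moic = equity_out / equity_in                    # money-on-money multiple
    irr = moic ** (1 / years) - 1                    # annualized return
    return moic, irr

moic, irr = lbo_returns(purchase_price=1_000.0, debt_pct=0.6, ebitda=100.0,
                        ebitda_growth=0.08, exit_multiple=10.0,
                        annual_debt_paydown=80.0)
print(f"MOIC: {moic:.2f}x, IRR: {irr:.1%}")          # MOIC: 3.17x, IRR: 26.0%
```

A production deal model layers in taxes, working-capital swings, transaction fees, and multiple debt tranches; that added intricacy is what the reported contractor evaluations appear to target.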
At the same time, OpenAI has introduced ChatGPT Atlas, an AI-driven web browser intended to challenge Google's dominance in online search. Initially released for macOS, Atlas incorporates the Operator AI agent to automate actions such as summarizing web pages, completing online forms, and making reservations, as reported by The Outpost. CEO Sam Altman called Atlas "the next step in web interaction," positioning it as a direct challenger to Google Chrome. The browser's debut has already moved the market, with Alphabet's shares falling 3% after the news broke. Experts suggest that Atlas could threaten Google's ad revenue by decreasing dependence on conventional search results, which are central to the company's $500 billion market value.
As OpenAI expands, regulatory oversight is increasing amid concerns over accountability and ethical standards. The European Union's AI Act, which became law in August 2024, sets out rigorous rules for high-risk AI, including requirements for transparency and risk evaluation, according to a National Law Review article. The legislation divides AI systems into four risk levels, banning those considered an "unacceptable risk," such as systems that infringe on fundamental rights. OpenAI's general-purpose models, like the GPT series, are subject to stricter examination because of their broad influence. The EU has also proposed updates to its 1985 Product Liability Directive, extending strict liability to AI and mandating that developers remain responsible for their products' safety. These steps are designed to tackle the difficulty of tracing responsibility and causation when AI decisions result in harm, an issue that grows more pressing as AI becomes more autonomous.
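For illustration, here is one way a compliance team might encode the Act's four-tier taxonomy in code. The tier names follow the Act, but the enum, the example systems, and their assignments are hypothetical sketches drawn from commonly cited examples, not an official mapping.

```python
# A hypothetical internal register mapping AI systems to the EU AI Act's
# four risk tiers. Example entries are illustrative assumptions, not
# legal classifications.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "transparency and risk-evaluation obligations"
    LIMITED = "lighter disclosure obligations"
    MINIMAL = "largely unregulated"

system_register = {
    "social-scoring engine": RiskTier.UNACCEPTABLE,  # infringes fundamental rights
    "resume-screening model": RiskTier.HIGH,         # affects employment decisions
    "general-purpose chatbot": RiskTier.LIMITED,     # must disclose it is AI
    "spam filter": RiskTier.MINIMAL,                 # negligible risk
}

for system, tier in system_register.items():
    print(f"{system}: {tier.name} ({tier.value})")
```

In practice, classification turns on a system's intended use and context, which is why general-purpose models like the GPT series sit awkwardly in a use-based scheme.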
Meta's latest policy update has added another layer of complexity to the regulatory scene. The company has prohibited third-party AI chatbots, including ChatGPT, from WhatsApp through new API guidelines, requiring OpenAI to end its integration by January 15, 2026, as noted by LiveMint. While Meta cites technical reasons like reducing server strain, the move is widely seen as a strategic effort to strengthen its own AI products across its platforms. This reflects a broader industry trend of major tech companies tightening their grip on AI ecosystems, as demonstrated by Google's recent rollout of Gemini AI in Chrome and its "AI Overviews" feature.
As OpenAI maneuvers through these changes, the balance between technological advancement and responsibility remains delicate. Although the company highlights AI's promise to boost efficiency, critics caution about unforeseen risks, especially in sensitive fields like finance and healthcare. The EU's regulatory approach, with its focus on human-centered AI and strict accountability, could become a model for international regulation, though its broad scope may pose compliance challenges for American companies operating in Europe.