Academic Paper Calls Out Meta/Google/Microsoft Over Fake "Open" AI Models

Hellovan Onion
Another title for this article might have been: "Google Caught Lobbying the EU to Be Made Exempt from New AI Law"

The authors argue that many of these “open” A.I. models aren’t so open after all, and that the terms are used in confusing and diverse ways that have more to do with aspiration and marketing than with technical description. The authors also interrogate how, because of the vast differences between large A.I. systems and traditional software, even the most maximally “open” A.I. offerings do not ensure a level playing field or facilitate the democratization of A.I.; in fact, large companies have a clear playbook for using their open A.I. offerings to leverage the benefits of owning the ecosystem and capture the industry.
...

The discussion of how mass utilization of corporate giants’ A.I. systems further entrenches their ownership of the entire landscape—in turn chipping away at openness and giving them immense indirect power—is a crucial point of the paper. In evaluating Meta’s PyTorch and Google’s TensorFlow, the two dominant A.I. development frameworks, the authors note that these frameworks do speed up the deployment process for those who use them, but to the massive benefit of Meta and Google.

“Most significantly, they allow Meta, Google, and those steering framework development to standardize AI construction so it’s compatible with their own company platforms—ensuring that their framework leads developers to create AI systems that, Lego-like, snap into place with their own company systems,” reads the paper. The authors continue that this enables these companies to create onramps for profitable compute offerings and also shapes the work of researchers and developers.

The takeaway is that, in A.I., labels like “open source” are not necessarily fact but rather language chosen by executives at powerful companies whose goals are to proliferate their technologies, capture the market, and boost their revenue.
...

And the stakes are high as these companies integrate A.I. into more of our world and governments rush to regulate them. In addition to the recent proliferation of not-so-open “open” A.I. efforts, the authors said it was the lobbying by these companies that prompted them to undertake this research.

“What really set things off was observing the significant level of lobbying coming from industry players—like the Business Software Alliance, Google, and Microsoft’s GitHub—to seek exemption under the EU AI Act,” the authors said. “This was curious, given that these were the same companies that would, according to much of the rhetoric espousing ‘open’ AI’s benefits, be ‘disrupted’ were ‘open’ AI to proliferate.”



It's interesting how the EU was threatening to sanction Elon Musk for wanting to have free speech on an American social media platform, while at the same time allowing Google to request exemptions to the EU's new AI law. This is how corrupt and opportunistic the EU is. This is who the Ukrainians are whoring themselves to: a corrupt bureaucracy that does backroom deals with Big Tech like Google so they don't have to follow the same rules that apply to everyone else.

Despite all the AI experts coming out and warning us that we need to pull the reins on AI and legislate it, Google wants lawlessness for itself while subjecting everyone else to corrupt, opportunistic laws.

Here's the paper itself; you can download the PDF from the webpage:

This paper examines ‘open’ AI in the context of recent attention to open and open source AI systems. We find that the terms ‘open’ and ‘open source’ are used in confusing and diverse ways, often constituting more aspiration or marketing than technical descriptor, and frequently blending concepts from both open source software and open science. This complicates an already complex landscape, in which there is currently no agreed on definition of ‘open’ in the context of AI, and as such the term is being applied to widely divergent offerings with little reference to a stable descriptor.
...
We find that while a handful of maximally open AI systems exist, which offer intentional and extensive transparency, reusability, and extensibility—the resources needed to build AI from scratch, and to deploy large AI systems at scale, remain ‘closed’—available only to those with significant (almost always corporate) resources.

 