The Daily Posting
Europe

Europe’s failure on AI | City Journal

By thedailyposting.com | March 22, 2024


On March 13, the European Union passed the world’s first comprehensive law regulating artificial intelligence. The EU’s Artificial Intelligence Act has been touted by many as a landmark victory for ensuring “safe” and “human-centred” AI, and efforts are already underway to promote the framework as a model for other countries.

The new law establishes a “risk-based approach” to regulating the deployment of AI systems, sorting them into four levels of risk: minimal, limited, high, and unacceptable. At first glance, this framework seems sensible, until you take a closer look at what falls into each category. While the law is broad and aggressively applied, it does surprisingly little to address the most devastating forms of AI risk, focusing instead on so-called fairness issues such as bias and discrimination.

For example, the “high-risk” category bundles together common applications that can amplify “discriminatory” outcomes, such as the use of AI in credit scoring, education, resume filtering, and human-resources management, with uses in medical devices and critical infrastructure, where a flawed AI system can literally be deadly.

Under the EU AI Act, these “high-risk” systems face strict obligations before they may be sold on the market, from disclosure of datasets and detailed usage documentation to the adoption of formal “quality control” systems. Although most providers are able to self-certify, the legal risks of non-compliance make these burdens real. Some “high-risk” use cases may even require the kind of premarket approval that governments typically reserve for untested new drugs. Such stringent requirements might be justified for, say, generative AI models capable of designing biological weapons, but hardly for recruitment software.

At the heart of this law is the EU’s confusion about AI as a technology category. Algorithms and machine learning are not new, and in most cases they are indistinguishable from basic software and statistics. It makes no difference whether an employer discriminates by way of a machine-learning model or out of personal bias; for use cases that risk perpetuating bias, existing anti-discrimination laws are sufficient.

Part of this confusion may stem from the origins of the EU AI Act, which long predates the launch of ChatGPT and the subsequent acceleration of AI capabilities. Although the law has undergone many changes since it was first drafted four years ago, its basic approach reflects the thinking of those early days. What makes the current wave of AI different is its scale, and the law’s absurdly overbroad definition of AI all but ensures that regulation ignores this most important aspect.

EU member states heard these criticisms during the waning months of the debate and, to their credit, added a section dealing with “general purpose AI”: powerful AI models like ChatGPT that exhibit general linguistic, visual, and reasoning abilities and may soon approach the status of artificial general intelligence (AGI). To access the European market, developers of these AI generalists (also known as “foundation models”) must comply with the EU Copyright Directive, submit a detailed summary of the content used to train their models, and produce technical documentation about their capabilities. The largest such models (around GPT-4 size or larger) are deemed “systemic” and must take additional precautions, such as adversarial testing and the implementation of cybersecurity protections.

Applying special oversight to the developers of AGI-like systems is the most rational and targeted provision of the EU AI Act. It mirrors the White House’s executive order on AI, an approach the United States would be wise to legislate as well. Unfortunately, even the best-behaved general-purpose AI developers will still face significant challenges complying with the law’s remaining, dubious “high-risk” provisions.

There are many narrow AI applications for tasks like scoring resumes, but general-purpose systems like ChatGPT can do much more. How a general-purpose model behaves depends heavily on how the user asks it to behave, which means there is no single, standard way to measure a model’s bias in a given “high-risk” use case. And that assumes “bias” is even clearly defined in that context.

The EU AI Act ignores these questions of technical feasibility. Instead, it simply instructs makers of general-purpose AI to “cooperate to enable compliance” by such high-risk AI system providers. No one knows how this will work in practice, at least until the European Commission issues policy guidance. Nevertheless, non-compliance can result in fines of up to €35 million or 7 percent of a company’s global annual revenue, so no one should be surprised if U.S. developers begin delaying the release of their latest models in Europe, or skip the European market altogether.

As a result, the EU AI Act could impede the rapid, iterative deployment of the defensive AI needed for institutional adaptation, and may even make AI less safe. Worse, the EU’s efforts to export its framework abroad (such as through its digital envoy in California) risk conflating its debacle of an approach to AI regulation with AI safety itself, potentially polarizing the U.S. politics most important to getting AI regulation right.

A smarter law would tightly regulate truly catastrophic risks, impose oversight on AGI laboratories, and postpone more comprehensive forms of regulation until the fog clears. Still, the problems with the EU AI Act run much deeper than any single law. They are symptoms of the EU’s habit of regulating as if it had reached the end of history: that legendary period when the basic structures of society settle into a final equilibrium, leaving technocrats the menial task of squeezing efficiency out of harmonized standards for, say, energy-saving kettles. If the advances in AI over the past four years have told us anything, it’s that history isn’t over yet.

Samuel Hammond is a senior economist at the Foundation for American Innovation.

Photo: J Studios/DigitalVision (via Getty Images)


City Journal is a publication of the Manhattan Institute for Policy Research (MI), a leading free-market think tank. Interested in supporting the magazine? As a 501(c)(3) nonprofit organization, donations in support of MI and City Journal are fully tax deductible as provided by law (EIN #13-2912529).

