Balancing Innovation and Risk: Navigating Regulatory Frameworks in the World of Generative AI
Customer Success Manager
Apr 28, 2023
Generative AI, with abilities such as natural language processing and image generation, has demonstrated the potential to transform industries and our daily lives. This rapidly growing field has taken the world by storm over the past few years, and with the AI market expected to grow twentyfold by 2030, the possibilities in Generative AI seem boundless. However, Generative AI has also raised concerns about potential risks and ethical implications, drawing increased attention to appropriate regulatory action on AI.
The European Union Artificial Intelligence Act (The EU AI Act)
The European Commission proposed the EU AI Act in an effort to create a legal framework for AI in the EU. Popular Generative AI applications such as OpenAI’s ChatGPT fall under its provisions. Critical aspects of the Act that can affect Generative AI include:
Risk-based categorization: Generative AI systems are categorized as minimal, limited, high, or unacceptable risk, with stricter regulations as the level increases.
Minimal: AI systems that pose negligible risk to public rights or safety; these are not subject to regulatory requirements under the Act. Examples include basic AI applications such as spam filters.
Limited: AI systems with a low probability of harm or minimal impact on society; these fall under some regulatory requirements, such as data transparency, but are not subject to the Act’s full requirements. Examples include AI-based chatbots, which enable interactions between customers and AI.
High: AI systems that have a significant impact on society and will be subject to strict regulatory requirements such as accountability and human oversight in addition to transparency. Examples include facial recognition, autonomous cars, and AI-assisted surgery.
Unacceptable: AI systems that pose an extreme risk to society will be banned under the Act. Examples range from government social scoring to voice assistants that encourage dangerous behavior.
Compliance and transparency: High-risk AI systems must undergo rigorous testing with proper data quality and management, and must also provide human oversight and transparency.
Steep fines for non-compliance: Developers that fail to comply with the Act may face fines of up to 6% of the company’s global revenue or 30 million euros, whichever is higher.
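The maximum-fine rule above is simple arithmetic: the penalty cap is the higher of 6% of global revenue or 30 million euros. A minimal sketch of that rule (illustrative only, not legal advice; the function name and figures are taken from the draft Act as described above):

```python
def max_eu_ai_act_fine(global_revenue_eur: float) -> float:
    """Maximum possible fine under the draft EU AI Act, per the figures
    cited above: 6% of global revenue or EUR 30 million, whichever is higher."""
    return max(0.06 * global_revenue_eur, 30_000_000)

# A company with EUR 1 billion in global revenue: 6% exceeds the EUR 30M floor.
print(max_eu_ai_act_fine(1_000_000_000))  # 60000000.0

# A company with EUR 10 million in revenue still faces the EUR 30M figure.
print(max_eu_ai_act_fine(10_000_000))  # 30000000.0
```

The "whichever is higher" structure means the flat amount acts as a floor, so even small developers face substantial exposure.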
Other regulatory actions impacting Generative AI include:
National regulations: Beyond the EU, countries such as the US and Canada are working on their own AI regulations. Current US regulation mainly focuses on specific use cases, like AI in recruitment. States such as California, Colorado, Connecticut, and Virginia have passed privacy legislation containing provisions governing AI decision-making: companies must provide consumers with opt-out rights, transparency, and governance via impact assessments. The US is also pushing to further refine regulatory policy with the goal of balancing regulation and innovation.
Industry self-regulation: AI developers are establishing internal ethical guidelines to promote the responsible development and deployment of their systems. One example is BloombergGPT’s effort to eliminate toxic and biased language in its dataset, “FinPile,” implementing rigorous testing procedures and risk controls to ensure safe use for clients.
The Challenge of Balancing Innovation and Risk in Generative AI
With all the existing and upcoming regulatory scrutiny awaiting Generative AI, the challenge is balancing innovation and risk in future developments. Some of the potential impacts of the EU AI Act and other regulatory actions include:
Promote responsible AI development: Comprehensive regulatory frameworks can help ensure Generative AI technology is developed ethically and responsibly, mitigating potential harm to the public.
Boost confidence and trust in Generative AI: With clear rules and standards, the public will have more confidence in the systems they use every day. An EU AI Act survey shows that over 35% of AI developers consider their own systems high-risk, underscoring the need for standardized rules.
Possible hindrance to innovation: If regulations become too strict, they may limit the industry’s ability to develop more advanced applications. Innovation drives economic growth by creating new products and services, which in turn attracts investment and talent. A country that adopts an overly strict regulatory policy risks losing its appeal to investors and talent, slowing its technological advance and costing it its competitive edge over other countries.
Future Implications for Generative AI Startups
Compliance and Regulations: Startups will have to adhere to potential new regulatory requirements such as data transparency and explainability. Most importantly, startups must ensure data privacy and security, focusing on ethical and responsible use. Meeting these requirements may demand additional resources and investment, increasing costs for startups.
VC Funding: Strict regulatory action may make it harder for startups to secure funding. However, startups that can demonstrate adherence to these regulations will increase investor confidence and attract funding.
Competition: Startups that adapt to the regulatory environment and prioritize responsible AI development can differentiate themselves from those that struggle to adapt.
While the shape of new regulations remains uncertain, the future of Generative AI still holds enormous opportunity. The EU AI Act is a prime example of the world adapting to AI technologies with a more comprehensive approach. Balancing innovation and risk is essential to fostering a future where Generative AI benefits the public while maintaining ethical and responsible standards and minimizing potential harm.