Governments around the world are setting out regulations for artificial intelligence (AI) to ensure its ethical and safe use.
The UK government has recently published a white paper on AI regulation that outlines several principles to follow when developing and deploying AI systems, such as ensuring transparency, accountability and fairness while benefiting society as a whole.
The European Union (EU) also plans to introduce new laws governing the use of AI across the bloc, which will require companies to be transparent about how they use AI and will set out rules for its development and deployment.
In addition, lawmakers in the United States are considering several bills regulating the use of AI, including one that would require impact assessments before AI systems are deployed.
China also introduced new guidelines last year calling for the safe, reliable and controllable use of AI, and for AI to be developed transparently.
In Australia, the Government has published a set of voluntary principles to apply when using AI.
It’s clear that governments around the world are taking AI regulation seriously as it becomes more prevalent in our lives.
Summary (courtesy of ChatGPT):
Governments in the UK, the EU, the US and Australia are considering a variety of regulatory conditions for the use of artificial intelligence.
According to the BBC, the British government has set out plans to regulate artificial intelligence with new guidelines on “responsible use”. Describing it as one of the “technologies of tomorrow”, the government said AI contributed £3.7bn ($5.6bn) to the UK economy last year. Critics fear the rapid growth of AI could threaten jobs or be used for malicious purposes.

The term AI covers computer systems able to do tasks that would normally need human intelligence. This includes chatbots able to understand questions and respond with human-like answers, and systems capable of recognising objects in pictures.

A new white paper from the Department for Science, Innovation and Technology proposes rules for general-purpose AI: systems that can be used for many different purposes, including the technology that underpins the chatbot ChatGPT.

As AI continues to develop rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety. There is concern that AI can display biases against particular groups if trained on large datasets scraped from the internet, which can include racist, sexist and other undesirable material. AI could also be used to create and spread misinformation.
As a result, many experts say AI needs regulation. However, AI advocates say the technology is already delivering real social and economic benefits for people. And the government fears organisations may be held back from using AI to its full potential, because a patchwork of legal regimes could cause confusion for businesses trying to comply with the rules.

Instead of giving responsibility for AI governance to a single new regulator, the government wants existing regulators – such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority – to come up with their own approaches that suit the way AI is actually being used in their sectors. These regulators will rely on existing laws rather than being given new powers.

Michael Birtwistle, associate director at the Ada Lovelace Institute, which carries out independent research, said he welcomed the idea of regulation but warned of “significant gaps” in the UK’s approach that could leave harms unaddressed. “Initially, the proposals in the white paper will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future.
“The UK will also struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators,” he said.
The white paper outlines five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:
- Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
- Transparency and “explainability”: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
- Fairness: AI should be used in a way which complies with the UK’s existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes
- Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
- Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI
Over the next year, regulators will issue practical guidance to organisations to set out how to implement these principles in their sectors. Science, innovation and technology secretary Michelle Donelan said: “Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.”
But Simon Elliott, a partner at law firm Dentons, told the BBC the government’s approach was “light-touch”, making the UK “an outlier” against the global trends around AI regulation.
China, for example, has taken the lead in moving AI regulations past the proposal stage with rules that mandate companies notify users when an AI algorithm is playing a role.
“Numerous countries globally are developing or passing specific laws to address perceived AI risks – including algorithmic rules passed in China or the USA,” continued Mr Elliott.
He warned that consumer groups and privacy activists will have concerns over the risks to society “without detailed, unified regulation”.
He is also worried that the UK’s regulators could be burdened with “an increasingly large and diverse” range of complaints, when “rapidly developing and challenging” AI is added to their workloads.
In the EU, the European Commission has published proposals for regulations titled the Artificial Intelligence Act which would have a much broader scope than China’s enacted regulation.
“AI has been around for decades but has reached new capacities fuelled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement.
The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use,” Mr Breton added.
Meanwhile, in the US, the proposed Algorithmic Accountability Act of 2022 would require companies to assess the impacts of the AI systems they use.
In Australia, eight Artificial Intelligence (AI) Ethics Principles set out by the Government are designed to ensure AI is safe, secure and reliable. They will help:
- achieve safer, more reliable and fairer outcomes for all Australians
- reduce the risk of negative impact on those affected by AI applications
- businesses and governments practise the highest ethical standards when designing, developing and implementing AI.
The principles are voluntary. “We intend them to be aspirational and to complement – not substitute – existing AI regulations and practices.”
By applying the principles and committing to ethical AI practices, you can:
- build public trust in your product or organisation
- drive consumer loyalty in your AI-enabled services
- positively influence outcomes from AI
- ensure all Australians benefit from this transformative technology.

The eight principles are:
- Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
- Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
- Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
- Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
- Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
- Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
- Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
- Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.