Welcome to the first article in our series dedicated to promoting compliance and quality management in the development of AI-based products. We begin with a well-grounded overview of this essential topic.
This series is aimed at a broad audience: not only developers, but also managers and buyers in the AI industry will gain valuable insights. We cover a wide range of relevant topics, including regulatory aspects, standards, and practical advice for testing and documenting AI products.
In the first part of the series, we highlight the central importance of the topic. The following section provides specific, application-oriented examples. This series gives you the opportunity to dive deep into the facets of responsible AI development and expand your knowledge in a practical way.
Meaningful AI innovations must solve real problems and offer clear added value for individuals or society. They should be practical, easy to maintain, and compliant with legal and social standards. If an AI system ignores these aspects, it cannot fully realize its problem-solving potential. The evolution of AI can be divided into three development phases.
In the first phase, the focus was on showing the potential of machine learning and refining the underlying algorithms and training mechanisms. The second phase shifted the focus to challenges related to inference and application in a real environment. During this time, numerous companies and solutions in the area of MLOps emerged, with the industry focusing more on operational aspects to address costs and maintainability issues — a challenge that still exists today (Big Tech is struggling to turn AI hype into profits).
The third and current phase of AI development focuses on compliance. As AI applications intervene ever more deeply in our everyday lives and touch more and more areas of life, discussions about rules and regulations are gaining importance. Important milestones include the EU AI Act, the AI Bill of Rights, and a number of other legislative initiatives. Recent examples include the G7 AI Code of Conduct and a new White House executive order, which provide direction for the responsible use and regulation of AI technologies.
Careful implementation and monitoring of these regulations is a central aspect of dealing with artificial intelligence. A comprehensive risk management system that covers all facets of AI is essential. The existence of a solid plan for testing and documenting AI processes is just as crucial. Questions like “How do I make sure my credit scoring algorithm doesn't discriminate?” or “How do I implement an effective AI risk management system?” are of paramount importance in this regard.
These challenges require sophisticated solutions to ensure the integrity and fairness of AI systems. Balancing innovation and compliance is a challenging but essential task that is critical to overall success. Excessive compliance can stifle rapid experimentation and testing, while ignoring compliance entirely can result in technological solutions that never make it into production and potentially endanger companies or society.
At Perelyn, the focus is on creating a balanced relationship right from the start and involving important stakeholders early on in the development process. This not only enables rapid prototyping in the innovation phase, but also creates reliable, secure and maintainable applications that can effectively solve real problems in the production environment.
In order to ensure rapid and cost-effective implementation, a clear understanding of all requirements from the outset is crucial. This includes comprehensive representation of relevant groups of people, testing for unintended biases, and appropriate documentation of the results. Let's take a closer look at the various stages of the AI life cycle.
Careful compliance for AI systems is critical at every stage of the life cycle. The following examples illustrate some of the steps required to develop a compliant and trustworthy AI system. This list is not exhaustive, and depending on your AI system's risk level, some steps may vary or be unnecessary.
Before starting the programming and testing phase, it is essential to define the requirements and select appropriate metrics to measure these requirements. This step is similar to the traditional software development process but has some additional complexities.
It is particularly important to determine the limits within which the system should operate and how its normal functioning should be assessed. One example is the Operational Design Domain (ODD) concept used in automated and autonomous systems, in particular self-driving vehicles. An ODD specifies the conditions under which the autonomous system is designed to operate, including factors such as environment, geography, time of day, traffic, and weather.
Based on the specified ODD, system requirements may differ. For example, a recommendation system may have different reliability requirements than a fully autonomous system. In addition, depending on the scope of application, considerations of fairness and explainability may carry more weight: systems that automate application screening should prioritize fairness, while fairness plays a smaller role in manufacturing quality control.
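To make this concrete, here is a minimal sketch of how an ODD might be captured in code. The dataclass below models a hypothetical driver-assistance feature; all field names and limits are illustrative assumptions, not a standardized schema.

```python
from dataclasses import dataclass, field

@dataclass
class OperationalDesignDomain:
    """Illustrative ODD for a hypothetical driver-assistance feature."""
    road_types: list[str] = field(default_factory=lambda: ["highway"])
    max_speed_kmh: float = 130.0          # operate only below this speed
    daylight_only: bool = True            # no operation at night
    allowed_weather: list[str] = field(default_factory=lambda: ["clear", "light_rain"])
    min_lane_marking_quality: float = 0.8 # assumed perception confidence in [0, 1]

    def contains(self, road_type: str, speed_kmh: float, is_daylight: bool,
                 weather: str, lane_marking_quality: float) -> bool:
        """Check whether the current driving situation lies inside the ODD."""
        return (
            road_type in self.road_types
            and speed_kmh <= self.max_speed_kmh
            and (is_daylight or not self.daylight_only)
            and weather in self.allowed_weather
            and lane_marking_quality >= self.min_lane_marking_quality
        )
```

At runtime, the system can evaluate `contains(...)` continuously and hand control back to the operator, or degrade gracefully, as soon as it leaves its ODD.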
Documenting every phase of the development process is critical to ensure accountability and auditability. It is an indispensable tool for meeting legal and regulatory requirements, in addition to industry standards and recognized best practices. It enables developers, reviewers, and other stakeholders to trace the system's development and the rationale behind design decisions.
There are currently no established standards in this area. However, there are several useful references that can provide guidance; model cards and data cards, for example, offer valuable templates. Many models on Hugging Face already come with pre-filled model cards, which can serve as a starting point. However, these only cover a limited part of the system documentation. Other factors to consider include potential misuse, known limitations, training needs for system operation, and other relevant aspects. The scope of these considerations depends on the specific type of risk and the use of the system.
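As an illustration, the following sketch uses the `huggingface_hub` library's model-card utilities to generate a skeleton document. The model name and all section contents are hypothetical placeholders, and the same structure can just as well be maintained by hand.

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata that tooling can parse (rendered as YAML front matter).
card_data = ModelCardData(language="en", license="apache-2.0", library_name="sklearn")

# Free-text sections covering the aspects mentioned above; contents are placeholders.
content = f"""---
{card_data.to_yaml()}
---

# Model Card: credit-risk-classifier (hypothetical)

## Intended use
Scoring consumer credit applications; not intended for employment decisions.

## Known limitations
Trained only on data from 2020-2023; performance on other periods is unverified.

## Potential misuse
Must not be used to infer protected attributes of applicants.

## Operator training
Operators must complete model-interpretation onboarding before use.
"""

ModelCard(content).save("MODEL_CARD.md")
```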
A risk management system is a fundamental part of compliance. It is a systematic approach for identifying and classifying risks and deciding which mitigation measures should be prioritized. One generally accepted guideline is the ISO 31000 series. Failure Modes and Effects Analysis (FMEA) is one tool for an effective risk identification process. An adapted version for an AI system could look as follows:
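Concrete formats vary from team to team, so the following is a minimal, hypothetical sketch of an FMEA-style risk register for an AI system. The failure modes and the 1-to-10 scores are purely illustrative; the classic risk priority number (RPN = severity × occurrence × detection) determines which risks to mitigate first.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    failure_mode: str
    effect: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (practically undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

risks = [
    FailureMode("training data", "underrepresentation of a protected group",
                "discriminatory credit decisions", severity=9, occurrence=5, detection=6),
    FailureMode("model", "silent performance drop after data drift",
                "inaccurate predictions in production", severity=7, occurrence=6, detection=7),
    FailureMode("serving", "stale feature pipeline",
                "scores computed on outdated inputs", severity=6, occurrence=4, detection=5),
]

# Mitigate the highest-priority risks first.
for r in sorted(risks, key=lambda r: r.rpn, reverse=True):
    print(f"RPN={r.rpn:4d}  {r.component}: {r.failure_mode}")
```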
To ensure compliance and accountability in AI systems, it is important to monitor training data and licenses of the data used. This helps to create transparency and traceability in the development process.
Documenting the properties of the test data set is also crucial. This provides insights into the data used for testing and helps to accurately assess the performance of the AI system. Tools like DVC can help automate this tracking. In combination with MLOps tools such as MLflow, training data, model artifacts, performance metrics, and other important metadata can be tracked.
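As a minimal sketch of what this can look like with MLflow's standard tracking API: the dataset path, license tag, metric names, and DVC revision below are illustrative assumptions, and in practice DVC would version the data files themselves.

```python
import mlflow

# Hypothetical run: record which data (and license) a model was trained on,
# alongside its performance metrics and supporting documentation.
with mlflow.start_run(run_name="credit-model-v1"):
    mlflow.log_param("training_data", "data/loans_2023.csv")  # versioned via DVC
    mlflow.log_param("training_data_dvc_rev", "a1b2c3d")      # illustrative DVC revision
    mlflow.set_tag("data_license", "CC-BY-4.0")
    mlflow.log_metric("test_accuracy", 0.91)
    mlflow.log_metric("demographic_parity_diff", 0.03)
    mlflow.log_artifact("reports/test_set_datasheet.md")      # documented test-set properties
```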
It is important that performance metrics are not the only decisive factors. To ensure fairness and avoid discrimination, it is essential to have a dedicated strategy that includes the steps and documentation required to address these concerns. This includes carrying out fairness analyses that focus in particular on protected groups and include tests for bias and discrimination. By measuring and evaluating these factors, we can develop an AI system that avoids discriminatory biases as much as possible. Our experience shows that measuring fairness can be a complex task.
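Demographic parity is one of several common fairness criteria (alongside, for example, equalized odds), and which one is appropriate depends on the use case. As a minimal sketch, the function below computes the demographic parity difference for a hypothetical credit-approval model; the data is made up.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    0.0 means both groups receive positive decisions at the same rate;
    larger values indicate a potential disparity worth investigating.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Illustrative example: credit approvals (1) for members of two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # ~0.2: worth investigating
```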
Monitoring is a critical aspect of ensuring the success of your innovation. By implementing efficient monitoring practices, you can detect data drift or concept drift early and address it promptly. This helps maintain the accuracy and reliability of your AI system over time.
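As a minimal sketch of data-drift detection, the example below applies a two-sample Kolmogorov-Smirnov test to a single feature; the income distributions and significance threshold are illustrative assumptions. Detecting concept drift, i.e. changes in the relationship between inputs and labels, additionally requires comparing predictions against (possibly delayed) ground truth.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference: np.ndarray, live: np.ndarray,
                         alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flag drift when the live
    feature distribution differs significantly from the training data."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(seed=42)
training_incomes = rng.normal(50_000, 10_000, size=5_000)  # distribution at training time
live_incomes = rng.normal(55_000, 12_000, size=1_000)      # shifted production data

if detect_feature_drift(training_incomes, live_incomes):
    print("Income distribution has drifted: consider retraining or alerting.")
```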
In addition, a robust complaints management system allows you to address any issues or concerns raised by users or other stakeholders to ensure their satisfaction and confidence in your innovation. Prioritizing and investing in monitoring and complaint management can make a significant contribution to the long-term success and sustainability of your AI solutions.
In order to successfully create meaningful AI innovations, it is not just algorithms that are decisive. It is also necessary to ensure that these advances comply with legal and ethical requirements. By taking these factors into account throughout the development process, from the initial stages to post-launch monitoring, developers and compliance experts can help create AI solutions that are not only innovative, but also responsible and long-lasting.
For us at Perelyn, responsible AI is not just an ideal; it is at the core of our consulting practice. We guide companies through the complexities of AI development and ensure ethical compliance from basic system design to thorough risk management. Our mission is to equip organizations with transparent, fair, and accountable AI solutions that balance innovative advances with societal and regulatory expectations. At Perelyn, we're not just creating AI — we're building a future where technology advances with integrity and responsibility.