While the sheer volume of proposed legislation may seem daunting, most of the proposed rules are reasonably similar. Typically, legislation targets AI tools used for high-stakes purposes, such as making decisions that impact people (e.g., hiring). And it does so along three dimensions: 1) requiring low levels of bias against protected classes of people, 2) mandating some form of impact analysis, and 3) prohibiting invasions of data privacy.
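As a rough illustration of the first dimension, a basic bias check might compute the “four-fifths” disparate-impact ratio over hiring outcomes. The sketch below is illustrative only, not a legal test; the applicant data, group labels, and 0.8 threshold are all assumptions for the example.

```python
# A minimal sketch of the kind of bias check such rules imply, using the
# "four-fifths" disparate-impact rule of thumb common in hiring analyses.
from collections import defaultdict

def selection_rates(records):
    """Compute the hiring selection rate per group from (group, hired) pairs."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical applicant outcomes: (protected-class group, was hired)
applicants = [("A", True), ("A", False), ("A", True),
              ("B", True), ("B", False), ("B", False)]

ratio = disparate_impact_ratio(applicants)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Warning: selection rates differ enough to warrant review.")
```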
Here’s the rub: these are all very basic requirements, and any responsible developer (or deployer) of AI tools should be doing them anyway, as a matter of course.
In fact, the “audit” mentality suggests that a single point-in-time evaluation (once per year?) is sufficient. It is not, yet that is exactly how existing audits are conducted.
In this age of big data, complex AI solutions can and should be evaluated continuously to ensure they do not go off the rails. AI is a powerful tool that can be used for good, but it can just as easily cause harm if not closely monitored. Consider a newborn baby: a being with unlimited potential to learn, grow, and impact the world. Yet no parent would allow this new person to live and explore without near-constant supervision and care.
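To make the contrast with a once-a-year audit concrete, continuous monitoring can be as simple as recomputing the same fairness metric on a rolling window of live decisions and alerting when it degrades. This sketch reuses disparate_impact_ratio from the example above; the window size, threshold, and alert hook are assumptions.

```python
# A sketch of continuous (rather than point-in-time) monitoring: evaluate a
# rolling window of recent decisions and raise an alert when the metric slips.
from collections import deque

WINDOW = 500        # number of most recent decisions to evaluate (assumed)
THRESHOLD = 0.8     # four-fifths rule of thumb, as above

recent = deque(maxlen=WINDOW)

def record_decision(group, hired):
    """Called for every live decision; checks the metric as data arrives."""
    recent.append((group, hired))
    if len(recent) == WINDOW:
        ratio = disparate_impact_ratio(recent)  # from the earlier sketch
        if ratio < THRESHOLD:
            alert(f"Disparate impact ratio dropped to {ratio:.2f}")

def alert(message):
    # Stand-in for paging, ticketing, or pausing the model. Hypothetical hook.
    print(f"[MONITOR] {message}")
```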