How AI struggles with bike paths and bias

Commentary: Despite continued advances in AI, we still have not solved some of its most fundamental problems.

Image: iStock / Jolygon

We have been so worried about whether AI-powered robots will take our jobs that we forgot to ask a much more basic question: Will they take our bike paths?

That’s the question Austin, Texas, is currently wrestling with, and it points to all sorts of unresolved issues related to AI and robots. The biggest of them? As revealed in Anaconda’s State of Data Science 2021 report, the biggest concern data scientists have with AI today is the possibility, even the probability, of bias in algorithms.

SEE: Ethical policy for artificial intelligence (TechRepublic Premium)

Move over, robot

Leave it to Austin (tagline: “Keep Austin weird”) to be the first to battle robot overlords taking over its bike paths. If a single robot resembling a “futuristic ice car” rolling past seems harmless, consider what Jake Boone, vice president of the Austin Bicycle Advisory Council, has to say: “What if in two years we have hundreds of them on the road?”

If this seems unlikely, consider how quickly electric scooters took over many cities.

So the problem is not really one of a band of Luddite cyclists trying to hammer away at progress. Many of them recognize that one more delivery robot means one less car on the road. In other words, the robots promise to ease traffic and improve air quality. Still, such benefits need to be weighed against the negatives, including clogged bike paths in a city whose infrastructure is already stretched. (If you have not been in Austin traffic recently, trust me: it is not pleasant.)

As a society, we have not yet had to grapple with issues like this. But if “weird” Austin is any indicator, we’re going to have to think carefully about how we want to embrace AI and robots. And we are already late in tackling a much bigger problem than bike paths: bias.

Make algorithms fair

People struggle with bias, so it is not surprising that the algorithms we write do, too (a problem that has persisted for years). In fact, ask 3,104 data scientists (as Anaconda did) to name the biggest problem in AI today, and they will tell you it’s bias (Figure A).

Figure A


Image: Anaconda

This bias sneaks into the data we choose to collect (and retain), as well as into the models we build. Fortunately, we recognize the problem. So what do we do about it?
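The report does not prescribe a fix, but a common first step is simply to measure disparities in a model's outputs across groups. A minimal sketch of one such check, demographic parity; the function name, predictions, and group labels are all hypothetical illustrations, not from the Anaconda report:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        count, positives = rates.get(group, (0, 0))
        rates[group] = (count + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [p / c for c, p in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical binary model outputs for applicants in groups "A" and "B"
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
print(f"Demographic parity gap: {gap:.2f}")
```

A gap near zero means the model approves each group at a similar rate; a large gap is a flag to investigate the training data and features, though no single metric settles whether a model is fair.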

Today, only 10% of respondents said their organizations have already implemented a solution to improve fairness and mitigate bias. Still, it is a positive sign that 30% plan to do so within the next 12 months, compared to only 23% in 2020. Meanwhile, although 31% of respondents said they have no current plans to ensure model explainability and interpretability (transparency that would help mitigate bias), 41% said they have either already started working on doing so or plan to do so within the next 12 months.
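Explainability techniques vary, but permutation importance is one simple, model-agnostic way to check what a model actually relies on, for instance, whether a proxy feature is quietly driving decisions. A toy sketch under assumed data; the model, features, and numbers are all hypothetical:

```python
import random

def model(row):
    # Toy "model": decides on income (feature 0) and ignores zip group (feature 1)
    return 1 if row[0] > 50 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    shuffled = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled)
    permuted = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, shuffled)]
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [(60, 1), (40, 2), (70, 1), (30, 2)]   # (income, zip_group)
labels = [1, 0, 1, 0]
print(permutation_importance(rows, labels, 0))  # income: accuracy drops if shuffled
print(permutation_importance(rows, labels, 1))  # zip group: no drop, model ignores it
```

If shuffling a sensitive or proxy feature barely moves accuracy, the model is not leaning on it; a large drop warrants a closer look at how that feature entered the data.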

So are we there yet? No. We still have a lot of work to do on bias in AI, just as we need to figure out more pedestrian issues like traffic on bike paths (or fault in car accidents involving self-driving cars). The good news? As an industry, we are aware of the problem and are increasingly working to solve it.

Disclosure: I work for AWS, but the views expressed herein are mine.
