Yoshua Bengio Among Investors Betting On Armilla AI To Eliminate Bias In Machine Learning
Armilla AI launched with $1.5 million in pre-seed funding and the goal of unearthing and correcting biased AI.
Investors in Armilla included the Spearhead Fund, Two Small Fish Ventures, and C2 Ventures, along with Yoshua Bengio, Apstat partners Nicolas Chapados and Jean-François Gagné, and a few other undisclosed angel investors. The round closed at the start of the summer.
The new funds are helping launch Armilla AI and have already been used to hire the startup’s first group of engineers.
Armilla provides customers with automated validation tools to test machine learning models for robustness, accuracy, fairness, data drift, bias, and more. The company describes its platform as the first all-in-one quality-assurance platform for machine learning.
“AI models are making more critical decisions every day, which means they require new monitoring protocols that can ensure they are accurate, fair and limit potential abuse,” said Yoshua Bengio, recipient of the ACM A.M. Turing Award, founder of the Quebec AI institute Mila, and an investor in Armilla.
“This growing need for independent validation requires the same attention and investment that went into creating the models themselves. This is how we build AI responsibly,” added Bengio.
The newly formed startup is working with a few undisclosed partners and “solving real-life issues for them,” according to Armilla CPO Karthik Ramakrishnan.
“The platform runs a series of rigorous tests on a range of scenarios,” Ramakrishnan said. For example, financial institutions that do not want to discriminate against immigrants would generally remove immigrant status from the dataset.
But according to Ramakrishnan, this is not enough. There is a strong correlation between immigrant status and living in multi-tenant units, he noted. “Why?” he asked rhetorically. “Because most immigrants tend to live in shared housing during their early years. So if you don’t also remove the multi-tenancy data, your model implicitly becomes biased against immigrants.”
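The proxy problem Ramakrishnan describes can be made concrete with a small sketch. The function names, the toy applicant data, and the correlation threshold below are all illustrative assumptions, not Armilla's actual product or API; the idea is simply that a feature left in the dataset can be strongly correlated with a protected attribute that was removed, and a pre-deployment check can flag it.

```python
# Hypothetical proxy-feature check: even after a protected attribute
# (here, immigrant status) is dropped from training data, a correlated
# feature (multi-tenant housing) can leak it back into the model.
# All names and thresholds are illustrative, not Armilla's implementation.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def proxy_features(rows, protected, threshold=0.5):
    """Flag features whose correlation with the protected attribute
    exceeds the threshold, so they can be reviewed or removed too."""
    target = [row[protected] for row in rows]
    flagged = {}
    for feat in rows[0]:
        if feat == protected:
            continue
        r = pearson([row[feat] for row in rows], target)
        if abs(r) >= threshold:
            flagged[feat] = round(r, 2)
    return flagged

# Toy applicant records (1 = yes, 0 = no): multi-tenant housing tracks
# immigrant status closely; income does not.
applicants = [
    {"immigrant": 1, "multi_tenant": 1, "high_income": 0},
    {"immigrant": 1, "multi_tenant": 1, "high_income": 1},
    {"immigrant": 1, "multi_tenant": 1, "high_income": 0},
    {"immigrant": 0, "multi_tenant": 0, "high_income": 1},
    {"immigrant": 0, "multi_tenant": 0, "high_income": 0},
    {"immigrant": 0, "multi_tenant": 1, "high_income": 1},
]

print(proxy_features(applicants, "immigrant"))
# Only multi_tenant is flagged as a likely proxy for immigrant status.
```

In this sketch, dropping the `immigrant` column alone would leave `multi_tenant` behind as a stand-in for it, which is exactly the implicit bias Ramakrishnan warns about.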
These are the types of checks that need to be done before a company puts its AI models into production, Ramakrishnan argued. “These are the challenges we see in real life. These are the kinds of things we don’t want to see reflected and creating these systemic bias issues.”
The CPO noted that companies won’t just release something “willy-nilly,” and that they have risk and compliance teams who want to make sure that legal and reputational risks are understood.
“We found that there was no way to do it in a systematic way,” Ramakrishnan said. “From our experience, we found it to be broken. Governments are also realizing this fact.”
For example, the Office of the Privacy Commissioner of Canada has been looking at policies around AI and expressed concerns about its use: “We pay particular attention to AI systems given their rapid adoption for processing and analyzing large amounts of personal information. Their use to make predictions and decisions affecting individuals can introduce risks to privacy as well as unlawful bias and discrimination.”
The European Union is further along in its legislation, having introduced a legal framework for AI as well as algorithmic accountability rules that can impose fines of up to six percent of a company’s revenue if the company is found not to have been rigorous in its use of AI.
Canada is also a founding member of the Global Partnership on Artificial Intelligence (GPAI). GPAI aims to share multidisciplinary research and identify key issues among AI practitioners, with the goal, among other things, of promoting trust and adoption of trustworthy AI.
“All of this means that most government regulators are realizing that AI is already ingrained in our society,” Ramakrishnan said. “And it’s one of those technologies that isn’t easy to see. It is not a physical object sitting in front of you. Decisions are made for you, about you, in the background. It’s going to be seamless. It’s concerning if we don’t know how these platforms or systems behave, and if they are not designed for safety.”
CEO Dan Adamson and CTO Rahm Hafiz founded Armilla with Ramakrishnan. All three previously worked at AI companies. Adamson was the founder and CEO of OutsideIQ, a company that developed proprietary cognitive computing research and analysis technology using both natural language processing and machine learning to think and act like a researcher. New York-based Exiger, a company that helps businesses monitor compliance with regulations such as anti-money-laundering rules, acquired OutsideIQ in 2017.
Hafiz worked as Director and Head of Cognitive Technologies at Exiger, while Ramakrishnan was Vice President, Head of Industry Solutions and Consulting at Element AI.
Toronto-based Armilla currently has nine employees and will generate revenue through a SaaS model. Launched in 2020, the company was in stealth mode until October 21.
Ramakrishnan said the company faces the normal startup challenges of scaling properly and generating revenue. He noted that with new legislation on the use of AI coming soon, Armilla doesn’t know how that might change the landscape.
“Customers are grappling with this,” he said. “So we have to act very quickly, so we are the first QA [quality assurance] platform for ML. And when you’re the first, it’s unfamiliar waters.”
Featured image source: Unsplash.