A whole bunch of big tech companies have joined a U.S.-based effort to advance responsible AI practices. The U.S. AI Safety Institute Consortium (AISIC) will count Meta, Google, Microsoft and Apple among its members. Commerce Secretary Gina Raimondo just announced the group's many new members and said they will be responsible for carrying out the actions outlined in President Biden's executive order on artificial intelligence.
“The U.S. government has an important role to play in setting the standards and developing the tools we need to mitigate risks and harness the immense potential of artificial intelligence,” Raimondo said in a statement.
Biden's October executive order is far-reaching, so this consortium will focus on developing guidelines for “red teaming, capability assessments, risk management, safety and security, and watermarking synthetic content.”
Red teaming dates back to the Cold War, when the adversary in military simulations was dubbed the “red team.” In this case, the adversary is a misbehaving AI: those who engage in the practice attempt to trick the model into doing bad things, like leaking credit card numbers, via a prompt hack. Once people know how to break the system, they can put better protections in place.
Watermarking synthetic content is another important aspect of Biden's original order. Consortium members will develop guidelines and actions to ensure users can easily identify AI-generated materials, with the hope of curbing deepfakes and AI-enhanced misinformation. Watermarking has not yet been widely adopted, although this program will “facilitate and help standardize” the technical specifications underlying the practice.
The consortium's work is only just beginning, although the Commerce Department says it represents the largest collection of testing and evaluation teams in the world. Biden's executive order and this affiliated consortium are about all we have for now, as Congress has yet to pass any significant AI legislation.