Friday, March 17, 2023

The problem with auto AI corrections v.02

When we make programs or assets with AI and something needs to be modified, the AI still can't be held responsible for the consequences the way a human creator can. Another problem is that we can't comprehensively assess the code an AI produces: how it arrived at a result, or how reliable that result really is. And if something goes wrong, what if the AI can't correct itself "responsibly", while we as mere humans can't understand or fully assess the weaknesses of the output, say, when the solution is complex?

How auditable are AIs?

