Friday, March 17, 2023

The problem with auto AI corrections v.02

When we make programs or assets with AI and something needs to be modified, the AI still cannot be held responsible for the consequences the way a human inventor can. The other problem is that we cannot really assess AI-generated code comprehensively: how the AI arrived at it, and how reliable the results are. If anything goes wrong, what if the AI is unable to correct itself "responsibly", while we as mere humans cannot understand or thoroughly assess the weaknesses of the results, say because the solutions are complex?

How auditable are AIs?
