Friday, March 17, 2023

The problem with auto AI corrections v.02

When we create programs or assets with AI, and something needs to be modified, the AI still cannot be held responsible for the consequences the way a human inventor can. Another problem is that we cannot comprehensively assess AI-generated code: how the AI arrived at it, or how reliable the results are. If something goes wrong, what if the AI is unable to correct itself "responsibly", while we as mere humans cannot understand or comprehensively assess the weaknesses of the results, say when the solutions are complex?

How auditable are AIs?

