The problem with auto AI corrections v.02

When we build programs or assets with AI, and something needs to be modified, the AI still cannot be held responsible for the consequences the way a human creator can. Another problem is that we cannot comprehensively assess AI-generated code: how it arrived at its results, or how reliable those results are. If something goes wrong, what happens when the AI is unable to correct itself "responsibly," while we as mere humans cannot understand or fully assess the weaknesses of the results, say when the solutions are complex?

How auditable are AIs?
