It wasn't easy talking about the Muah.AI data breach. It's not just the rampant child sexual abuse material throughout the system (or at least requests for the AI to generate images of it), it's the reactions of people to it. The tweets justifying it on the basis of there being no "actual" abuse, the characterisation of this as being akin to "merely thoughts in someone's head", and following my recording of this video, the backlash from their users about any attempts to curb creating sexual images of young children being "too much":
Which is making customers unhappy - "any censorship is too much": pic.twitter.com/fzfrFdKL8w
— Troy Hunt (@troyhunt) October 12, 2024
The law will catch up with this (and anyone in that breach creating this sort of material should feel very bloody nervous right now), and the writing is already on the wall for people generating CSAM via AI:
This bill would expand the scope of certain of these provisions to include matter that is digitally altered or generated by the use of artificial intelligence, as such matter is defined.
The bill can't pass soon enough.