It honestly seems like 'occasionally our LLM botches things' should pose the same business problem as 'occasionally our human employees botch things', calling for the same level of tolerance and similar mitigations.
I am curious whether there's actually some legal reason that it wouldn't.
(Obviously the 'we assume no responsibility for the chatbot we told you to listen to' defense was a villainous Hail Mary that got exactly the respect it deserved...)