Oooor...
You build it on a non-networked system where the only means of writeable data transfer is a physical medium Too Small to hold the AI, or a self-executing installation package for a copy of it (ROM units can be as big as you like). No explosions necessary.
If it goes that nuts, you just turn off the hardware and wipe the drives.
Give it sensors (wired, remember: no outbound communications links), and ensure its output devices aren't physically capable of exceeding human tolerances or applying any kind of mind control.
A little screening of the staff for erratic behaviour or security risks, and you have a perfectly safe AI research environment that doesn't involve flushing millions of credits down the toilet for no good reason.
Seriously, limiting an AI to being a non-threat until you've actually worked out the bugs and got a stable personality Isn't Hard.
Don't connect any devices capable of harmful output. (Wireless connection devices count as harmful output at this point.)
Don't be a dick.
Don't transfer it (or allow it to be transferred) to an uncontrolled system.
That's It.
No explosives, no convoluted code locks. Just get a viable AI process working on an isolated system, then teach the resulting entity right from wrong and how to interact with society, just like you would any other sophont. Once it's learned those lessons, it's as safe as any AI is going to Get. Or anyone else, for that matter.
It's always poorly thought-out attempts to limit or direct things without understanding the situation properly that cause revolts anyway.