Hardware Black Boxes
Reprinted with permission from Alex Oliva.
Pondering the true nature of the soul of hardware components (whether they're software or hardware) doesn't strike me as a useful way to reason about whether hardware is usable in software freedom.
Hardware is typically a black box. It's inescapable that, at some level, the machine will do what its designer made it to do, and there's nothing inherently wrong (as in unethical) about that.
Moreover, there's no expectation that you should be able to change it at that level, for it could be all hardware circuits, which are impossible to modify. Even if you could build another machine or component with the desired change, that wouldn't modify the original machine or component. There's no ethical imperative for that.
Even when you have access to its specifications and source code, which parts were compiled into hardware circuits and which into instructions for a general- or special-purpose programmable component is immaterial to whether the machine is usable in software freedom.
It's a black box. It could range from all hardware to a qemu layer on top of all hardware, or a qemu layer on top of... If you watched Inception, you get the idea. Or maybe not.
As with AGPLed software on a remote server, even with specifications and source code, you can't generally tell whether there are undisclosed malicious or undesirable features omitted from the sources.
Just as you can't generally tell whether friends really like you or just pretend to. It's the nature of black boxes, and if you worry too much about it, you may end up without friends, and without hardware.
Sure, if they exhibit malicious behaviors, you probably don't want them in your life.
For purposes of software freedom and ethics, what matters for programmable hardware is whether the machine is faithful to its programming model. If it takes your instructions and carries them out, you can use it as a black box for your computing in (software) freedom, whether the machine is on-premises hardware or a remote virtual machine.
Now, if you can tell that it takes instructions and commands from others, or sends information to others, that behavior lies outside the black box, and there may be grounds to suspect it is malicious, even if it doesn't directly interfere with the exposed programming model.
Software components outside the hardware black box bring with them an ethical issue that is not present in components inside the black box: they are visibly and indisputably software, and as such, you deserve control over what they do to your machine, even if what they do isn't properly your own computing. (I mention this because the fundamental guiding principle of free software philosophy is that you deserve control over the software you use to do your own computing; this goes a little beyond that.)
Being clearly outside the black box, these software components are not covered by the inescapable nature of hardware, not even theoretically: they're indisputably software, and software is executable and modifiable and shareable unless someone prevents you from doing these things by unethical means.
This post is about ethics, the core issue for free software, not about other issues such as security. For other issues, opening the black box may be important, whether it's software or hardware.
So blong,
Image source: Alexandre Oliva