points by aseipp 2 years ago

The kernel will work fine, but at minimum EL2 runs the Qualcomm Hypervisor (Gunyah) which prevents native KVM virtualization from taking place. This is true of all Snapdragon platforms.

Windows supports virtualization on the 8 Gen 3 only because it uses a custom setup to load a signed binary blob (an "applet") into the EL2 hypervisor, whose signature the hypervisor is hardcoded to accept; that blob/applet can then be used by Windows as a kind of shim into EL2-land to spawn VMs, etc. But Qualcomm's hypervisor is always present and enforcing its security policy.
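A quick way to see the practical effect described above: when the kernel boots at EL1 under a vendor hypervisor, it cannot enable KVM, so Linux never exposes /dev/kvm. A minimal sketch of that userspace check (illustrative only, not a definitive probe):

```python
# Sketch: check whether KVM is usable from Linux userspace.
# On Snapdragon devices where Gunyah holds EL2, /dev/kvm is typically
# absent because the kernel booted at EL1 and cannot enable KVM.
import os

def kvm_available(path="/dev/kvm"):
    """Return True if the KVM device node exists and is accessible."""
    return os.path.exists(path) and os.access(path, os.R_OK | os.W_OK)

print(kvm_available())  # typically False when a vendor hypervisor owns EL2
```

On a machine whose kernel was handed EL2 at boot, this would normally print True instead.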

In practice every single modern system is running tons of binary firmware blobs; it's mostly a question of where you draw the line on functionality and isolation of components (security, integrity, availability). Here, Qualcomm does intentionally reduce some functionality, which is pretty bad when you consider that the UEFI spec for ARM mandates EL2 handover, I think, and they just ignore it.

kramerger 2 years ago

My experience from working a few years with qualcomm CPUs at a major home electronics brand:

1. Half of the EL3 and EL2 code is so old that it has to jump between AArch32 and AArch64 multiple times during the boot process.

2. The silicon is full of errors. There are also major security vulnerabilities due to Qualcomm doing their own slightly modified version of everything.

3. Not even their biggest customers (e.g. Samsung) are given the source code for the magical blobs used during boot.

4. Given these issues, the EL2 code is basically there to hold things together. It will never go away, and they will never show you what it contains.

thomastjeffery 2 years ago

> In practice every single modern system is running tons of binary firmware blobs

This is a problem we should be loud critics of. Proprietary firmware hurts us all, and practically benefits no one.

  • matheusmoreira 2 years ago

    Yeah. These days our operating systems don't actually operate the system anymore. Hardware manufacturers usurped our control of the machine. They think of Linux as the "user OS", to be virtualized and sandboxed away from the real computer.

    https://youtu.be/36myc8wQhLo

    • gary_0 2 years ago

      Only a secret and privileged few actually get to boot and talk to a modern physical CPU. The rest of us only get to run on top of an abstraction.

      Wake up, Neo. The Matrix has you...

    • StillBored 2 years ago

      And frankly that is as it should be. The OS has enough responsibility trying to arbitrate the collection of hardware resources while providing its own set of abstractions (filesystems, processes, etc) to the application layers.

      These computers are no longer simple cores with simple devices. If you want that, go buy a DOS machine from the 1980s, or an ARM7TDMI.

      The problem, though, is that companies invest in all this firmware and become convinced that DIMM training, signal integrity/PHY training, algorithms that estimate the cooling capacity and thermal mass of the attached heatsink, or any of a hundred other things are somehow competitive advantages that deserve to be locked up behind closed doors rather than open sourced. In some cases they are right, but that shouldn't keep them from publishing reference firmware sources and register documentation.

      So, really people complaining about proprietary firmware are sorta missing the point. Complain about the lack of documentation to create your own firmware, not that the company thinks they have a competitive advantage in that firmware.

      And also admit that what one needs is hardware/firmware abstractions that allow big kernels like linux to communicate with all the little cores in the machine working on specific tasks, be that NVMe for disks, AT command sets for modems, or ACPI for power management.

      • matheusmoreira 2 years ago

        What good is open source firmware when the hardware only accepts cryptographically signed proprietary blobs?

        • sliken 2 years ago

          Assuming you can verify that the signed blob is identical to the one you can build yourself, you can verify there are no intentional back doors or unintentional security issues.

          Not as good as being able to sign it yourself, but way better than not having the source.

          It also prevents an attacker from hacking the hardware in a way that would persist after a full reinstall of the OS.

          • matheusmoreira 2 years ago

            Yes, I agree. Source code and reproducible builds which can be cryptographically verified to be equivalent to the signed blob would go a long way towards making them trustworthy. Still denies us the freedom to modify them but at least trust could be assured.
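            A sketch of the comparison described above, assuming the vendor blob's payload can be separated from its signature so the bytes are directly comparable (the file names here are hypothetical; real blobs may need the signature stripped first):

```python
# Sketch: verify a vendor-signed blob against a reproducible local build
# by comparing content hashes. Assumes both files contain the bare
# payload, i.e. any signature envelope has already been removed.
import hashlib

def digest(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def blobs_match(vendor_blob, local_build):
    """True if the vendor payload is byte-identical to the local build."""
    return digest(vendor_blob) == digest(local_build)
```

            If the build is truly reproducible, blobs_match returns True and the signed blob is exactly the audited source, bit for bit.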

      • 127361 2 years ago

        Not on Rockchip platforms, as far as I am aware. The RK3588 is one of their highest-performing SoCs; it has four Cortex-A76 cores running at 2.4 GHz, putting it somewhat close to desktop performance, without any of these blobs or locked-down bootloaders. And mostly complete documentation[1] is available.

        1. https://github.com/FanX-Tek/rk3588-TRM-and-Datasheet/tree/ma...

dmitrygr 2 years ago
  > In practice every single modern system is running tons of binary firmware blobs

This one does not: https://www.amazon.com/ASUS-C100PA-DB02-10-1-inch-Chromebook...

The SoC's boot ROM is 32K, fully inspectable, and does not linger once the OS is booted. Every other software component is built from source, and you can replicate it.

  • fragmede 2 years ago

    Even the Broadcom-based wifi card? My read of https://wireless.wiki.kernel.org/en/users/drivers/brcm80211 says that for the 4354 in the C100, you need firmware for brcmfmac.

    • dmitrygr 2 years ago

      You are right.

      (I use a usb-to-ethernet dongle and the wifi card is disabled, but you are right in theory)

      • StillBored 2 years ago

        And you probably still have firmware on the main machine. Just about every modern USB controller offloads the USB packet arbitration/sequencing to a microcontroller and a pile of firmware; e.g., XHCI is usually an 8051 and some firmware sitting on the other side of the XHCI register description. It's probably the same on the actual USB-to-ethernet device, where there is conceptually something like the Cypress FX3 integrated with an ethernet MAC/PHY in the chip, and a couple of ARM cores running firmware to respond to the USB packets and act as a control plane for the data being DMA'ed to/from the ethernet buffers. Same with the disk: does it have NVMe, SD, eMMC? Then there is likely another handful of ARM devices doing the load leveling and flash management on the "disk". For that matter, the battery and charge controller might look dumb, but it has a little microcontroller integrating instantaneous charge/discharge information and adjusting the charge current/etc.

        https://blog.einval.com/2022/04/19#firmware-what-do-we-do

        • mips_r4300i 2 years ago

          Agreed, people seem to only see blobs when they run on x86. A typical PC system probably has at least two dozen ancillary CPU cores spread out among the IO and peripherals alone.

          If I had a dollar for every 8051 that turned out to be inside a chip I designed around...

asddubs 2 years ago

What's EL2 exactly?

  • baby_souffle 2 years ago

    > What's EL2 exactly?

    Probably execution level 2.

  • Avery3R 2 years ago

    Exception Level 2 [1]. They're analogous to "protection rings" on x86. Generally, EL0 is user mode, EL1 is kernel mode, EL2 is the hypervisor, and EL3 is the "secure monitor"/firmware code; the closest x86 analogy, I think, would be SMM. On top of all of that there's also TrustZone, with its own EL0 and EL1.

    1: https://developer.arm.com/documentation/102412/0103/Privileg...
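    For the curious: on AArch64, the CurrentEL system register encodes the level in bits [3:2], and it is readable from EL1 and up (an EL0 read traps). A small sketch decoding example raw register values:

```python
# Sketch: decoding AArch64 CurrentEL values. The exception level lives
# in bits [3:2] of the register; these are example raw values, not live
# reads (EL0 code cannot read CurrentEL directly).
def current_el(reg_value):
    """Extract the exception level from a raw CurrentEL value."""
    return (reg_value >> 2) & 0b11

print(current_el(0b0100))  # EL1 (kernel)         -> 1
print(current_el(0b1000))  # EL2 (hypervisor)     -> 2
print(current_el(0b1100))  # EL3 (secure monitor) -> 3
```

    The same two-bit extraction is what kernel code does (e.g. in early boot) to decide whether it was entered at EL1 or EL2.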