I have been exploring the secrets of the TPM. Is it anything like the TPS report from Office
Space?
Almost exactly as useless. No, no, no, I jest, of course. So TPM stands for Trusted Platform
Module, which is not a terribly useful descriptive name. It's basically a bit of hardware on
your computer that can do things like encrypting, decrypting and storing small amounts of data,
but it also looks at your machine and tries to understand what state various bits of it
are in. And it can use those states to make decisions about whether or not it should
decrypt the data that you're asking it for.
The states being things like: has the computer been taken apart, and is it currently in a laboratory
of some foreign actor or something? Or is it simpler than that?
Not necessarily that specific, but that sort of thing. It tries to essentially tell if
someone has tampered with the known state of the computer, like have they turned secure
boot off, or has the bootloader changed, has the firmware been changed, things like that.
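If you're curious, you can peek at these measurements yourself on a machine with the tpm2-tools package installed; a sketch (the register contents will differ on every machine):

```shell
# Read the SHA-256 bank of the TPM's platform configuration registers.
# PCR 0 reflects the core firmware; PCR 7 reflects the Secure Boot state.
tpm2_pcrread sha256:0,7
```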
And there's other things that the operating system can hook into, I think, as well. Now
the reason that I got interested in this is because I use full disk encryption on my
work laptop. And I find it a bit of a pain when I have to reboot, for example, recently
Ubuntu started requiring you to reboot to install some software updates. And the workflow
for that became: reboot, enter your decryption password, let it run the updates, and then
it reboots again, and then you have to enter your encryption password again, and then it
boots back into your system. And that was a right pain having to do that. And you know,
other times, you know, I haven't actually taken my laptop anywhere, so I'm
not worried about anyone... yeah, I've not left its side. So it's not like it's someone
else trying to do anything. It's just me pressing reboot from my logged in system. And
you know, it would be nice if I didn't always have to enter the encryption password. So
I got wondering whether using the TPM might be a way around this. So I found some useful
and some less useful guides to this on the internet. One went into quite a bit of detail
about how to set up a script that ran at boot and ran some commands against the TPM and
got it to give you the encryption key, which all seemed to run OK except that it didn't
actually get an encryption key out for some reason. When I ran the commands in a logged
in system, it worked. And when it tried to do it at boot, it just didn't get anything.
So I was obviously doing something wrong there. And then I found another tool using systemd
with TPM2, which completely broke my system. I should say I was doing this on a VM. That's
fine. In fact, I was using Quickemu with the tpm=on option to test this out, because
I didn't want to actually be doing this on a real system. But while I was doing
this, I ended up reading some man pages, as you do, and found in the man page for crypttab
that you can configure crypttab to point to your TPM directly without any additional tools
or utilities or scripts that you have to write yourself. And I imagine crypttab is something
that works alongside fstab to say these file systems are encrypted and here are the strategies
to provide decryption keys. That's exactly what it does. Yes. So you have a list of: here
are your encrypted devices and here's what they're called. And then one of those names maps
to the thing that you're mounting in fstab. Finding this, it took quite a bit
of digging to discover that there's a command called systemd-cryptenroll. And what
this does is generate a decryption key and add it to your LUKS encrypted volume, because you
can have multiple keys associated with the volume. And then it stores that in the TPM and
does some other wizardry in the background to connect things up. Then you edit your crypttab,
you add an option there to say which device it is. If you've only got one TPM, you just
say auto and it finds it. And then you rebuild your initramfs, which is the system that
runs at boot to bootstrap everything. And then that will include the bits it needs to read
the stuff out of the TPM. And this was all going very well, except I was trying to do it
on Ubuntu. And for some reason, when you try and rebuild the initramfs on Ubuntu with this
option, it doesn't recognize it. And I tried a few different versions, with different versions
of cryptsetup, but it just wasn't having any of it, despite other people saying they
were successful doing this on other distributions. So I thought, well, someone said this works
on Fedora. I'll give that a go. And I did exactly the same thing on Fedora, except
they use a utility called dracut, or something to that effect, to rebuild the initramfs. And
it worked. So I ended up with a system that I could boot into. And when I rebooted, it
would decrypt the disk without me having to do anything. Can I just clarify, you're doing
this all in a VM that's talking to the fake TPM module, and you're doing it with Fedora
in a VM, right? In a VM, right? Yes. And that's right, isn't it Martin? It's actually
a software TPM. Yeah. Yeah. So what Mark's using here is my Quickemu project, which
is a wrapper around QEMU. And one of the things that I added to that project was to enable
a software TPM emulator. So this was the result I was looking for, except I didn't have
the option to, if I shut down the computer completely, have it then ask me for the
password when I logged in. So the way that the TPM works is it has a number of things called
platform configuration registers, which is basically where it stores like a signature
of the various things that it's looking at. And you can tell it, when you're running
systemd-cryptenroll, which of the registers it should look at to decide whether it should
give up the encryption key. But I went looking at what happens to these registers when
you reboot versus when you power down. And there's no difference. So there's no way of
saying decrypt it for me when you reboot. But if I do a cold boot, ask me for the password.
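For reference, the enrollment workflow described earlier boils down to a handful of commands. This is a sketch rather than a tested recipe: the device path, PCR choice and volume name are placeholders to adapt.

```shell
# Generate a random key, seal it in the TPM2 chip, and add it as an extra
# LUKS keyslot (binding to PCR 7, the Secure Boot state, as an example):
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/vda3

# Then tell /etc/crypttab to fetch the key from the TPM at boot; the first
# field must match the mapped name your fstab mounts:
#   luks-root  UUID=...  none  tpm2-device=auto

# Finally rebuild the initramfs so it includes the TPM bits:
sudo dracut --force          # Fedora
# sudo update-initramfs -u   # Ubuntu (where the option wasn't recognised)
```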
Okay. That has clarified it. The scary thing I was thinking is, surely you've just unlocked
it at any point, like when someone steals it out of your bag. Exactly. But this in theory
should work. So this has kind of got me thinking, what's the actual use case for doing this?
Because yes, it would stop the case where someone takes the hard drive out of your computer
and then tries to decrypt it because the encryption key only lives in that TPM. And once they've
started messing around with the hardware, it's not going to give it back. They can't boot
from a USB drive and get it out. It's not going to accept that. But it doesn't stop the case where,
you know, I'm getting off the train and someone steals my bag and then they turn the computer on.
It boots decrypt. And yes, they're at the login screen and they can't actually login without my
password. But unless I'm wrong, my data's not completely secure at that point. It is decrypted
or accessible decrypted on the computer somehow. If they could find some other exploit, if there was
a bug in the lock screen that meant if they mashed the keyboard a lot, it crashed something like that.
They don't even need to do that. They could just change the init process to /bin/bash. They could
decrypt the disk and have it boot straight to a root prompt running Bash. So there are other things
which you can do, like locking down GRUB and locking down your EFI config so that things can't be
edited like that. Oh, okay. Well, the only editing would be at the point when you press F10 or
whatever button triggers your GRUB, press down arrow, edit the line. And if you can stop them being
able to edit the line at boot time... I don't know if GRUB does that. Right. I'm pretty sure it's
possible that you can lock down things like that as well, which would stop them being able to do
that. But I still don't feel comfortable with the idea that I've enabled full disk encryption, but
whenever you boot the laptop, it just decrypts anyway. So I'm sort of wondering, yeah, am I missing
something here? Maybe one of our security-minded listeners might be able to enlighten me as to this,
or I wondered if one of you two might understand this better than I do. But it seems like an odd set
up to me. So I'd be interested to hear anyone's input on this. So to be clear, you haven't enabled it
on your host. No. You were fiddling with this entirely in the VM to get it working and understand
the technology, but it didn't seem to work as you wanted. And so: help! Yes. Yeah, I can't offer any
assistance with this because I've never used TPM for disk encryption. So I have zero experience.
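For what it's worth, the GRUB lockdown mentioned above is usually done with a superuser password, so normal boots stay unattended but pressing 'e' to edit a boot entry needs credentials. A hedged sketch; file locations and the exact mechanics vary by distribution:

```shell
# Generate a PBKDF2-hashed password to embed in the GRUB config:
grub-mkpasswd-pbkdf2

# Then add something like this to a custom fragment (e.g. /etc/grub.d/40_custom):
#   set superusers="admin"
#   password_pbkdf2 admin grub.pbkdf2.sha512.10000.<hash>
# Menu entries marked --unrestricted still boot without a password,
# but editing the kernel command line at boot now requires one.

sudo update-grub   # or grub2-mkconfig -o /boot/grub2/grub.cfg on Fedora
```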
Have you used it for something else? The only time I used it was when I integrated it into
Quickemu in order to get Windows 11 images to boot. That was the reason that I tampered with it. So
it was purely just to satisfy the system requirement for Windows 11. And that is where my TPM
knowledge starts and ends. Linux Matters is part of the Late Night Linux family. If you enjoy the show,
please consider supporting us and the rest of the Late Night Linux team using the PayPal or Patreon
links at linuxmatters.sh slash support. For $5 a month on Patreon, you can enjoy an ad-free
feed of our show, or for $10 get access to all the Late Night Linux shows ad-free.
You can get in touch with us via email, show@linuxmatters.sh, or chat with other listeners
in our Telegram group. All the details are at linuxmatters.sh slash contact.
I've migrated to a dual GPU system in two of my workstations. So I have Radeon and Nvidia
sitting in my PCs. Kissing! I do hope they're not touching. It can lead to all kinds of
electrical failure. Well, maybe they're not touching physically, but they're definitely
interacting digitally. So this, you've got two graphics cards with separate outputs plugged in,
or are they doing some sort of combined processing outputty thing to a single screen?
They are not both driving displays. Let me explain. So this all started when I used to have
an RTX 3090 in my main workstation, and it's a fantastic GPU, but it has one considerable drawback
in that it has 24GB of video memory, and half of that video memory is on the rear of the card,
and the heat that that memory creates is dissipated by a metal backplate. And that metal backplate
was 99 degrees Celsius at all times. And that backplate is also adjacent to the fan in the case
that pushes air out of the case. But what it's actually doing is it's blowing superheated air
at the radiator for the CPU water cooling, and it was turning that water radiator into a space heater,
and the direction of the air being exhausted from the case was at me. So what that meant
was that I was permanently being blasted with not just warm, but considerably hot air.
During the summer months, that was just intolerable. So as much as I like the GPU,
I thought I've got to find a better way of doing things, and amazingly using two GPUs is actually
the solution. So what I've done is I took the 3090 out, which is an oversized triple slot card,
so it takes up like half of the available space in the case; physically, it's huge. So I
took that out and replaced it with a Radeon RX 6700 XT, which is a dual slot card, and it's
what you'd call a mid-range GPU, I imagine. And then with the space that made in the case, I was
able to reorder the other cards on the motherboard, and free up a single slot space on the motherboard,
and in there I added an Nvidia T1000 GPU, and these are rather dinky. In fact, it's a single slot
GPU, and it only takes power from the bus, which means technically it can only pull 75 watts.
That's the maximum that the PCI slot can deliver, and actually the card uses way, way less than that.
So by comparison, the RTX 3090 at idle would use about 40 watts of power, which is not too bad for a
GPU, but under load, it would get up to like 365 watts; it's an absolute power pig, you know, in that
regard. But it doesn't matter whether it's idle or going full bore, that backplate is 99 degrees
all the time. That was the main problem. Was it not possible to just, like, turn the case
90 degrees to face the other way and blow the hot air out the door or something? It had to go
somewhere. Well, it didn't really matter that it was blowing it at me; that heat had to
go into this room in some way or other. It would find you, yes. Using the Radeon 6700 alongside
the Nvidia T1000 has brought the power consumption down considerably. The two together under idle
conditions, the Radeon uses about 30 watts of power when it's just moving the desktop around.
And as best as I can tell, the Nvidia T1000 uses between 4 to 5 watts when idle, because it's really
not doing anything; I only have the displays plugged into the Radeon GPU. This Nvidia GPU is purely
for compute, and I'll get to how I use it in just a moment. And when the system is under load,
let's imagine I am game streaming, so playing games and streaming that all with OBS.
The Radeon is using about 190 watts to basically composite OBS and play the game,
and then the Nvidia GPU is just being used for the compute to do the encoding of the video stream
that gets sent to Twitch or wherever. I've seen other people suggest having two GPUs, and in fact,
I've seen some people online who profess to be experts at OBS suggesting that this was actually
not a good thing, and you should absolutely not put two GPUs in a machine, but you should just
throw one big GPU at it. And so it's interesting to hear your experience of it being good and
performant and not hot. Yes, so it works very well, inasmuch as now I'm getting considerable
power saving. So under load, this new configuration is using about 150 watts less than the 3090,
and the temperatures are way down. Both those GPUs sit around 50 to 60 degrees depending on the
load that they're under, so it isn't generating that same volume of heat into the case and the
room around me. And are there any complications with installing the, I assume that you're using
the vendor supplied drivers for both of these or are you using the open source drivers for AMD?
And yeah, what's it like having both of those installed and managing that?
That's an excellent question, because that's actually sort of the secret sauce in making this
all work, and it kind of goes to Alan's point about people recommending not doing this. I imagine
maybe those people aren't running Linux, which is where Linux kind of shines at this particular
sort of use case. So I'm just using the regular drivers to run the Radeon stuff. So I don't use the
AMD GPU Pro drivers, or whatever they're called; I'm just using the open source drivers plus the
firmware that you get with the linux-firmware bundle. And that means that, you know,
Wayland and all of that stuff works, including video acceleration and hardware encoding,
and whatever else is available on the Radeon GPU. On the Nvidia side, I'm choosing not to do that;
I'm just using the Nvidia proprietary drivers. But the important step here is, on Ubuntu, there's
a meta package for the Nvidia drivers which has -headless in the name. And effectively,
that includes all of the Nvidia drivers except the display server drivers, so no Xorg drivers.
And so that enables things like CUDA and NVENC and all the compute capabilities,
but it has no facility to drive displays at all. Right. So when you run those two side-by-side,
you now get the full compute capability of an Nvidia GPU, but none of the display output,
and that's also what helps keep the power draw of that Nvidia GPU down, because driving
the displays is what actually pumps a load of voltage through the GPU.
And I was talking about temperatures and power consumption earlier. I'm able to measure
that with nvtop, so there's a little command line love for you here. nvtop's been around for ages;
the NV is a clue that it was an Nvidia tool, but it recently added support for multiple GPUs.
So when I run nvtop now, it's a stacked display, and I can see all of the metrics for both GPUs,
what's running on them and all the rest of it. I've run this
configuration on Ubuntu, and I'm now running it on NixOS. On NixOS, it's a slightly different
configuration, in that you just tell your NixOS configuration that you're using what's called
reverse sync. Because traditionally, when you have an Nvidia GPU, it wants to be the primary,
and the other things are its subordinates, and what we're doing is tipping that on its head.
I want the Radeon GPU to be the primary, and I just want the Nvidia GPU to be the sibling,
the dumb thing that we just do compute with. And it works well on both. It's been really stable.
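Concretely, the two setups described look roughly like this. Both snippets are sketches: the driver version in the package name and the PCI bus IDs are illustrative, so check `apt search nvidia-headless` and `lspci` on your own machine.

```shell
# Ubuntu: the proprietary Nvidia compute stack without the display drivers
sudo apt install nvidia-headless-535 nvidia-utils-535
```

```nix
# NixOS: Radeon as the primary GPU, Nvidia subordinate via reverse sync
hardware.nvidia.prime = {
  reverseSync.enable = true;
  amdgpuBusId = "PCI:16:0:0";  # illustrative; find yours with lspci
  nvidiaBusId = "PCI:1:0:0";
};
```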
I've been running this for like nine months now. It's been really great. But this means I
have all the benefits of running a Radeon driver on the desktop. So Wayland, if you care about those
things, will work just fine. But most importantly, all of those workloads where an Nvidia GPU is required,
for example, DaVinci Resolve. DaVinci Resolve will work even though the display driver is
Radeon and it has a requirement for CUDA. It finds the CUDA requirement satisfied by this other GPU,
which means you can do your effects composition on the Nvidia GPU and the video encoding on the
Nvidia GPU, all seamlessly. And the same is true in OBS Studio. Everything's composited with the
Radeon, but then the Nvidia GPU is used for all of the hardware encoding, and you can turn all of
the quality settings up to 11 on the Nvidia GPU, because all it's doing is that encoding piece.
So you get no penalty to your game performance, whereas a single Nvidia GPU can sometimes take too
much away from the game when it's doing the video encoding. It's interesting. I have a
not quite as complicated, but similar setup of multiple GPUs in the NUC that is on my desk here,
which has an inbuilt Intel CPU, an AMD GPU, and externally an eGPU, which is an Nvidia card.
But I'm using them the other way around, the traditional way: the Nvidia is driving the displays
and the AMD is for whatever else I can use it for. But yeah, it's interesting that you can actually
use both the GPUs at the same time with both drivers loaded, and it works fine on Linux and on Windows.
Right. So you've used it with Windows and Linux quite happily. Yeah. Yeah. I mean, maybe some people
haven't experimented with this in other parts of the world, because they live in places where
air conditioning is ubiquitous, and they wouldn't run into this particular, you know, climate issue.
Or they live in Norway, where it's just naturally cold. Yeah. Exactly. And I also have an Intel
Arc GPU. And what I'm going to be looking at next is how I can potentially use the Intel Arc GPU
in a similar configuration. So maybe use Intel Arc as the primary with Nvidia alongside it,
or maybe in my test workstation, all three GPUs at the same time, and see what madness we can
cook up there. But yeah, it's been a great configuration. So if you have got mixed workloads,
dual GPU setups on Linux work a treat. And these T series cards from Nvidia: single slot,
bus powered, not tons of CUDA performance, something around the 1050 Ti sort of region,
but in terms of their video encoding performance, exactly the same as a 3090. So pretty great.
I have a further update to what I talked about in episode 10. And a small reminder is that last time I
downloaded some historical EV data, charging data from BMW, the manufacturer of the car.
And I uploaded it to Axiom, my employer, to build a dashboard so I could see some detail about
the different types of places where I've charged and how frequently I use my home charger and other
charges. So that's what I talked about in the last episode. Go back for a refresher to listen to
episode 10 for that. But the problem with that is I only had the historical data and I've had the
car for 18 months and I could download a snapshot of that 18 months, but I couldn't use that to get
ongoing data because I'm still on the car and I'm still charging the car every day or so. And so I
wanted to get ongoing data. And BMW has an API for getting that car data. And I tried to register.
They have a service called AOS, which is the After Sales Online System. And I got rejected.
I applied for access, and they said nein. So I said, please, it's my car, and I would like access to
the data around my car. And I got redirected to another department, who also said no,
because, and I quote, I do not "fit to be a publisher of technical information". So what I think it is,
is it's designed for app developers or people who work in the automotive industry who want to
integrate with the car's systems in some way. And I kind of moaned a little bit on Mastodon, and
then I did a bit of googling and actually found a tool that helped me. And it's called Bimmer
Connected, Bimmer being the colloquial name for BMW manufactured motor vehicles. I went down a
little bit of a rabbit warren there. In the UK, we tend to call them Beemers, but actually Beemer is
generally the term for the motorcycles made by BMW, and Bimmer is the term for the cars, apparently.
And there's a different name they use in China that sounds very much like
boomer, which sounds a bit like a cow or something. It's very strange. Anyway, there's a whole article
on BMW's website about Bimmer, Beemer and so on. Anyway, there's this piece of software called
Bimmer Connected. And it's open source. And it's a library to query the status of your BMW or your
Mini using the connected drive portal. And the connected drive portal is a thing that I have a
sign on for because it's the thing that the official app uses to link you as a person to your car.
And this thing is a Python library. So you could use it to query the API using your existing
username and password that you already have and the VIN of your car, the VIN being the vehicle
identification number or VIN number. And it also has a command line tool you can use to get the data.
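The command-line flow described here would be roughly the following. A sketch from memory: check `bimmerconnected --help` for the exact flags and region names before relying on it.

```shell
pip install bimmer_connected

# Dump a JSON snapshot of the car's current state; the region is one of
# north_america, china or rest_of_world:
bimmerconnected status --json 'you@example.com' 'your-password' rest_of_world > snapshot.json
```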
And the command line tool is called bimmerconnected. And all you do is run bimmerconnected and you
pass it your username and password and the region that you're in, because I think they've got multiple
endpoints for the USA, China and the rest of the world. And then it produces a JSON dump of data about
the car. And what data about the car, you ask? Well, this is a different kind of data, because
before, you did like a data takeout; it was all of your data for all time. So yeah, what is this?
Is this everything again or is this something else? No, this is just a snapshot. And the snapshot
is like real time. So if you query it multiple times over a period of time, the data will change.
Well, some of the data will change. There's some of it, which is stuff that doesn't change,
physical attributes of the car, like the make and the model, the drive train, whatever enabled
capabilities that the car has, like electric windows and so on. And there's some stuff that doesn't
change very often, like software versions; that's reported in there as well. It's just one big,
big JSON file. What else is in there? The charge schedule. So if you set it to charge at certain
times, that's in there. The status of the doors and the windows and the sunroof, whether they're closed
or open, which is good from a security point of view. But the stuff I actually wanted is also in
there. And the stuff I wanted was the mileage, the charge level, the range, and the latitude,
longitude and heading of the car. So I can tell where it is, what the charge level is, and how many
miles it's done. And which way it's pointing. And which way it's pointing, yes, which is very
helpful. I think the reason why they put that in there is in the app, it shows a little picture of
your car, and it actually does show which way the car is pointing on a map, which is quite cute.
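Once you have a snapshot like that, pulling out just those fields is a one-liner per field. Here's a sketch against a made-up snapshot (the field names below are invented for illustration; the real bimmer_connected output nests things differently):

```shell
# A fake snapshot standing in for the real bimmerconnected JSON dump;
# the field names are invented for illustration.
cat > snapshot.json <<'EOF'
{"mileage": 12345,
 "chargeLevelPercent": 80,
 "location": {"latitude": 51.5, "longitude": -2.6, "heading": 270}}
EOF

# Pull out the interesting fields with python3 (jq would do equally well):
mileage=$(python3 -c "import json; print(json.load(open('snapshot.json'))['mileage'])")
charge=$(python3 -c "import json; print(json.load(open('snapshot.json'))['chargeLevelPercent'])")
heading=$(python3 -c "import json; print(json.load(open('snapshot.json'))['location']['heading'])")
echo "mileage=$mileage charge=$charge% heading=$heading"
```

For the sample above, this prints `mileage=12345 charge=80% heading=270`.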
I don't know why that's useful, but it is. So I wrote a five line shell script, which calls
bimmerconnected with all my credentials and dumps out the JSON. And then I just throw that at
Axiom with curl, using our API. And I do that... I was doing it every minute, but then realized that
was a little bit excessive to keep poking it going, where's my car? Where's my car? Where's my car?
Every 60 seconds. Especially given that when I zoomed in on the dashboard that I
built in Axiom, I could see that even if I poke the API every 60 seconds, it only actually updates
every five minutes. So I think my car only reports status every five minutes. And so I dialed back
my script so that it goes to sleep for five minutes and then pokes the API again. And so now I have
the historical data and I have ongoing data showing charge level. It doesn't quite have all the
information that I could get from the data dump. It doesn't have like the street address of the
charger where it's currently sat, but it does have latitude and longitude. And I can calculate
if, you know, the car was at a certain spot and the amount of charge went up, then I could
log that somehow. So I can use this information. It's just not quite as nicely formatted. But I
could also once a month do a takeout and get that historical data again. And I'll put all of this
in a follow up blog post to the last one. And that one will be in the show notes. But I just
thought I'd mention that I've managed to wrap this whole thing together with the takeout and
bimmerconnected. And thank you to all the wonderful people who've written and maintained that
bimmer_connected bit of Python. Well, open source will find a way. Yeah, it certainly does.
As will a dodgy shell script running on a server in my house.
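For anyone who wants to build something similar, that dodgy script probably looks something like this. A sketch only: the Axiom dataset name and the bimmerconnected flags here are assumptions, so check both against their own documentation.

```shell
#!/bin/sh
# Poll the BMW API every five minutes and ship each JSON snapshot to Axiom.
while true; do
    bimmerconnected status --json "$BMW_USER" "$BMW_PASS" rest_of_world |
        curl -s -X POST "https://api.axiom.co/v1/datasets/car-status/ingest" \
             -H "Authorization: Bearer $AXIOM_TOKEN" \
             -H "Content-Type: application/json" \
             --data-binary @-
    sleep 300   # the car only reports fresh status every five minutes anyway
done
```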