Looks like Debian and Ubuntu have shipped patches, but I'm not seeing them show up in the RHEL derivatives just yet. I'm sure that'll be soon™.
LTT
Fair enough; I haven’t watched LTT in a long, long time since his tech clown gimmick just irritates me, but he definitely used to be all over his sponsors, and not in a good way.
Honestly it feels like they’re trying to get away from being just a file sync platform, and are pushing for more corpo feature sets to compete with gsuite or O365.
Which I mean is great: that’s exactly what I needed and why I use it - it let me ditch almost all of my Google services and move it all to selfhosted.
But I bet it also causes incentives to prioritize fixes and features that are focused on that, and pushes stuff like ‘make the android sync app work like every other file sync app in history’ to the bottom of the list.
Nope, that curl command says ‘connect to the public ip of the server, and ask for this specific site by name, and ignore SSL errors’.
So it’ll make a request to the public IP for any site configured with that server name even if the DNS resolution for that name isn’t a public IP, and ignore the SSL error that happens when you try to do that.
If there’s a private site configured with that name on nginx and it’s configured without any ACLs, nginx will happily return the content of whatever is at the server name requested.
Like I said, it’s certainly an edge case that requires you to have knowledge of your target, but at the same time, how many people will just name their, as an example, vaultwarden install as vaultwarden.private.domain.com?
You could write a script that'll recon through various permutations of high-value targets and have it make a couple hundred curl attempts to come up with a nice clean list of reconned and possibly vulnerable targets.
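As a rough sketch of what that recon loop could look like (the IP, domain, and service names here are made-up examples, not anything from an actual setup):

```shell
#!/bin/sh
# Hypothetical Host-header recon sketch. TARGET_IP and BASE_DOMAIN are
# placeholder values -- an attacker would substitute what they've learned
# about the target.
TARGET_IP="203.0.113.10"
BASE_DOMAIN="private.domain.com"

# Common high-value selfhosted service names someone might guess.
NAMES="vaultwarden nextcloud grafana proxmox"

probe_cmd() {
    # Build the curl invocation for one guessed hostname: force the Host
    # header, ignore the TLS name mismatch (-k), and keep only the HTTP
    # status code. A non-404 answer hints the server name exists.
    printf "curl -sk -o /dev/null -w '%%{http_code}' --header 'Host: %s' https://%s/\n" \
        "$1.$BASE_DOMAIN" "$TARGET_IP"
}

# Print the probe commands instead of running them (dry run).
for name in $NAMES; do
    probe_cmd "$name"
done
```

This just emits the curl commands rather than firing them, but it shows how cheap the enumeration is once you know the public IP and can guess the naming scheme.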
Just tested that and uh, yeah, what the hell? Not something my workflows need, but that’s a shocking oversight considering damn near everything else 100% does that.
Yeah, no shit. Even my local news, which is a top-10 market and has actual money to spend, has half of its shit sourced from fucking Facebook and Twitter. The amount of 'a thing happened today!' that's fucking Instagram video is just amazing.
Can’t even afford to send someone out with a camera to take a picture anymore.
That’s the gotcha that can bite you: if you’re sharing internal and external sites via a split horizon nginx config, and it’s accessible over the public internet, then the actual IP defined in DNS doesn’t actually matter.
If the attacker can determine that secret.local.mydomain.com is a valid server name, they can request it from nginx even if it's got internal-only DNS, by including that domain in the Host header of their request. For example, in curl:
curl --header 'Host: secret.local.mydomain.com' https://your.public.ip.here -k
Admittedly this requires some recon, which means 99.999% of attackers are never even going to get remotely close to doing this, but it's an edge case that's easy to guard against with ACLs, and you probably should when doing split-horizon configurations.
Ugh, not the best marketing for Nextcloud to have a public share not work, lol. It seems like 25% of people just can’t see them but they work for everyone else so who knows.
Anyway, have a pastebin instead: https://pastebin.com/zPyvgxYX
Not saying you’re wrong, but what doesn’t work right? I haven’t noticed any behavior that seems wrong to me. Usually interact with nextcloud via the nextcloud section that gets added by the client in the file picker/file manager on the OnePlus Nord I’m using.
I also don’t think LTT ever calls anyone out about anything, ever. He’s made noises about ethics and sponsorship, but he’s never actively gone after a big sponsor, except maybe kinda ASUS and I’d bet that’s because everyone else did and he thought it’d be a bad look if he didn’t.
You mentioned nVidia, and I’ll mention what happened to Hardware Unboxed when they didn’t toe the marketing line. Sure, sure, they “apologized” after public outcry, but the point is they absolutely went after someone who didn’t stay in line with what they wanted the message to be.
Yep, I’m also in the dev’s matrix chat and it’s pretty much been cycling through working and broken for pretty much everyone :(
They outsourced the writing to people making literal pennies, who are working from a couple of facts provided by someone more local.
It’s not QUITE as bad as having ChatGPT just make shit up, but it’s not too far from it.
I’d wager walmart is listed because they weren’t paying their floor rent for all the redboxes they have at walmarts, not that they were buying DVDs there.
Happy to share the docker-compose.yml I'm using for my setup. It includes OnlyOffice so that I can edit files internally, Google Docs style. You can skip that section and remove the oonet network definition if you don't need/want it. You'll want to change the volume mount paths (or define volumes if you'd rather not use bind mounts) and change the 'supersecretpasswordhere' to something actually uh, secure.
Anyway, file is at https://thecloud.home.uncomfortable.business/s/32HoxHajW33PRbf
Agreed. I’d say that, if you have the option, then the libre option is the one you should pick whenever you can. But, realistically, software is a hammer, and you should pick the hammer that does what you want, and ignore the internet hollering that you’re somehow impure if you use even a single piece of proprietary software.
One thing to be careful of that I don't see mentioned: you need to set up ACLs for any local-only services that are accessible via a web server that's public.
If you’re using the standard name-based hosting in say, nginx, and set up two domains publicsite.mydomain.com and secret.local.mydomain.com, anyone who figures out what the name of your private site is can simply use curl with a Host: header and request the internal one if you haven’t put up some ACLs to prevent it from being accessed.
You’d want to use an allow/deny configuration to limit the blowback, something like
allow internal.ip.block.here/24; deny all;
in your server block so that local clients can request it, but everyone else gets told to fuck off.
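To make that concrete, a minimal sketch of what that server block could look like (the hostname, LAN range, and paths are placeholder examples, so adjust for your own network):

```nginx
# Hypothetical internal-only vhost behind a split-horizon nginx.
server {
    listen 443 ssl;
    server_name secret.local.mydomain.com;

    # Only the LAN may see this vhost; Host-header requests arriving
    # from the public internet get a 403 instead of the content.
    allow 192.168.1.0/24;
    deny  all;

    root /var/www/secret;
}
```

With that in place, the curl-with-a-Host-header trick still reaches nginx, but the deny rule answers with 403 rather than serving the private site.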
Interesting, but yeah, entirely expected. Non-native code performance has always been worse, and the more complicated the app the shittier the performance. Same thing happened on the M1’s release for any Intel-native games on OS X, too.
I’ll be the contrary one: I tried a lot of things and ended up, eventually, going back to Nextcloud, simply because it’s extendable and can add more shit to do things as you need it.
File sync and images may be all you need now, but let’s say in the future you want to dump Google Docs, or add calendar and contact syncing, or notes, or to do lists, or hosting your own bookmark sync app, or integrating webmail, or…
It’s got a lot of flaws, to be sure, but the ability to make it do essentially every task you might want cloud syncing for, at least to a ‘good enough’ level, has pretty much kept me on it.
Sequel yes, more DLC for the first one, no.
Because most people don’t care and just want to play the latest $GAME_NAME_HERE?
And I mean, Nintendo has already sued people into essential slavery and nobody said shit, so I don’t know what the fuck will get people’s attention.