-
Yes.
-
DevRant became full of newbies with newbie issues, like:
- SO downvoted my question
- most people still use [existing technology], but I want to work with [hype] (alternatively: [insert tech] works so well for all my simple pet projects, companies are dumb for not using it more)
- haha floating point arithmetic and type coercion sucks, look how smart I am
- yes I am [insert group] and I'm a software developer, look at me
- plz help me with this basic issue, I swear I'm trying
- something broke when I did X and it's definitely [insert corporation]'s fault -
@lungdart There's an important difference between accident and negligence.
-
@usr--2ndry You're right, but that was a rhetorical question and the answer is obvious from the context. The point is that due to all those names being included in the UA header for the sake of backwards compatibility, the original information the header was supposed to carry became obfuscated to the point where it no longer serves its true purpose. Therefore it would make hardly any difference if all the browsers just sent some generic UA header to satisfy legacy software, while providing the relevant info about the browser using some other method.
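To make that "some other method" concrete, here's a rough sketch assuming a Chromium-based browser that exposes the User-Agent Client Hints API (navigator.userAgentData); anything else still falls back to the legacy UA string:
```typescript
// Rough sketch, assuming Client Hints support: prefer structured brand data
// over the legacy User-Agent string where available.
// navigator.userAgentData only exists in Chromium-based browsers today, so
// treat it as an assumption about the client, not a universally supported API.
interface UABrand {
  brand: string;
  version: string;
}

function describeBrowser(): string {
  const uaData = (navigator as unknown as { userAgentData?: { brands: UABrand[] } }).userAgentData;
  if (uaData?.brands?.length) {
    // Clean brand/version pairs instead of the "Mozilla/AppleWebKit/Chrome/Safari" soup.
    return uaData.brands.map((b) => `${b.brand} ${b.version}`).join(", ");
  }
  // Fallback: the same old obfuscated header value, kept for compatibility.
  return navigator.userAgent;
}

console.log(describeBrowser());
```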
-
UA header is bullshit anyway. It was only ever meant as a way for clients to introduce themselves, e.g. for debugging purposes. But of course people came up with countless "clever" hacks they could perform with that data, and of course those hacks broke as soon as a new version of a browser came out since no one had the time to maintain those hacks. And then browser developers decided to break it even further by mimicking the UA header to get around those broken hacks.
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36"
Wtf does that even mean? Why does a single UA header include the name of every major browser out there? -
@IntrusionCM That's the problem - software developers work hard to make the best product they can, but it doesn't fucking matter because the actual distros and vendors who provide the software to the end user just skip a bunch of "optional" features. And if you do a quick google search, you learn that you need to install some additional libraries to get the full functionality (https://stackoverflow.com/questions...); however, most users just never get to that point and instead assume things don't work.
-
@Parzi When was the last time you checked? 8 years ago? Because ImageMagick 6 was perfectly capable of handling webp back in 2016. Android has had support for webp at least since version 4.3, which came out in 2013. Apple added support for webp in iOS 14, released in 2020.
-
@retoor Using WEBP for a few high-res images will save you more than 1MB of bandwidth, although that won't improve the initial page load time by that much.
-
I remember trying to use the Nx / Nrwl thing once, and within the first 15 minutes I was browsing unresolved GitHub issues to at least figure out what I was doing wrong and why the provided boilerplate didn't start. In the end I gave up and went with Yarn (2+) workspaces for the monorepo.
-
It's mostly confirmation bias at this point. No one really tracks how many bugs they had before unit tests vs after, or how much time they spent on those bugs vs on writing tests. No one tracks how many times they got false errors from unit tests. But everyone notices when a test catches some edge case, because that's what they're expecting to happen with unit tests.
-
Nah, I just remote connect to home computer and push it...
-
None of these things were ever sustainable in the first place, and they're finally getting a reality check. On top of that, while the price per unit has gone down, the demand has grown way faster (take YouTube: full HD videos consume ~4GB per hour of content, which was $0.24 in 2010, while 4K videos consume ~28GB per hour of content, which is $0.45 today). The gap is even greater for network transfer: not only does the same content consume several times more data, there are also way more users requesting that content. And of course, while cloud infrastructure for start-ups and small services is way cheaper than on-prem, it quickly outgrows the cost of on-prem infrastructure as the business grows and the hardware requirements become more sustained and predictable.
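Running the numbers from that example (illustrative figures from above, not actual billing data), the per-GB price dropped roughly 4x while the data needed per hour of content grew 7x, so the cost per hour still went up:
```typescript
// Back-of-the-envelope check using the figures above (illustrative, not real pricing).
const hd2010 = { gbPerHour: 4, costPerHour: 0.24 };  // ~4 GB/h of full HD, 2010
const uhdNow = { gbPerHour: 28, costPerHour: 0.45 }; // ~28 GB/h of 4K, today

const perGb2010 = hd2010.costPerHour / hd2010.gbPerHour; // ≈ $0.06 per GB
const perGbNow = uhdNow.costPerHour / uhdNow.gbPerHour;  // ≈ $0.016 per GB

console.log((perGb2010 / perGbNow).toFixed(1));        // ~3.7x cheaper per GB...
console.log(uhdNow.gbPerHour / hd2010.gbPerHour);       // ...but 7x more GB per hour,
console.log(uhdNow.costPerHour / hd2010.costPerHour);   // so cost per hour still rose ~1.9x
```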
A lot of those businesses were funded by investment funds and shareholders who never gave a fuck about the actual product and business model as long as they got the money, and now they're desperately trying to "fix" their ignorance. -
I've yet to see a CMS where that would be possible / practical. Basically, with a CMS you have a number of things that need to be versioned separately:
- Core CMS; that's probably the easiest part, since it already comes with versioning. Just put the selected version in the deploy file and add it to your git repository.
- Add-ons; those also come with their own versioning, so you can create a lockfile to store the exact version of each add-on you use, and keep that file under git (along with any custom add-on code and config files).
- Configuration; here's where things become problematic. CMSs usually store most of their configuration in the database along with any user-generated content. Since configuration usually depends on what version of the CMS and add-ons you're using, this pretty much prevents you from doing any kind of automatic rollback; you need to manually cherry-pick the relevant data from the DB so it can be restored on rollback (assuming users won't change it through the CMS). -
@iceb JSON doesn't allow trailing commas.
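For anyone who wants to check, a strict parser (JSON.parse here) rejects the trailing comma outright:
```typescript
// JSON.parse follows the JSON spec, which does not allow trailing commas.
try {
  JSON.parse('{"a": 1, "b": 2,}');
} catch (e) {
  console.log((e as Error).name); // SyntaxError
}

// Without the trailing comma it parses just fine.
console.log(JSON.parse('{"a": 1, "b": 2}')); // { a: 1, b: 2 }
```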
-
Ads like these are what companies do when they want to stand out from the competition without actually offering anything more to their employees.
-
WordPress is a content management system for simple websites and blogs. You don't need advanced user management for that, but you do need a powerful content editor.
If you're using WordPress for something where you need to import / export users, sincerely, fuck you and your abomination of a website. -
There is no real solution. The human brain is limited, so it's near-impossible to keep a number of complex secrets in there. Computers are good at brute-forcing simple secrets, therefore we need some form of complex secrets. Storing complex secrets on dedicated devices works, but there's no real backup if something goes wrong with such a device. If you want to use backup devices, you need to have them with you whenever you store a new secret, but they don't provide much of a backup if you keep them together.
The best you can do is try to combine multiple solutions to find an adequate balance between security and robustness, but that doesn't really work when different users have different needs. -
@hjk101 First of all, the article never states that Google did it on purpose, so stop lying.
Second, at the time the article was published (mid-2018), Chrome had already had v0 support for over 4 years, while Firefox and Edge had only just added support for v1 a few months prior, and neither of those browsers implemented the v0 API at all despite having plenty of time to do so before the v1 API was established.
Third, when Google started working on the new YouTube interface (presumably a year or so prior), neither Firefox nor Edge supported any web components APIs. The v1 API had just been released as a preview, while v0 already had a lot of traction and an established UI framework. So it's obvious that Google used v0 because it was the more mature ecosystem at the time, not because it would make other browsers slow.
And if Firefox and Edge had implemented the v0 API sometime before v0 got deprecated, there would have been no problem with any of that. -
@hjk101 That's a weird way of saying that things are slower due to Firefox sucking at implementing new APIs and technologies as they become available in other browsers.
-
What's happening is those search engines are switching to GPT-like LLMs for data retrieval. Which basically means they're becoming useless for anything specific, because a) the more specific a topic, the less training data there is, and b) LLMs can't really distinguish between a highly specific search term and other similar search terms unless specifically trained to do so.
-
@webketje The commit is from February this year, therefore everything you said is completely irrelevant. They could literally just do "if (val instanceof Uint8Array) return val;" and everything would work perfectly fine. But nooooo, gotta make an overengineered piece of crap and force users to mess around with pointless hacks to get it working. And why bother properly explaining such a weird decision on a massively popular project with hundreds of contributors and official support from one of the largest software foundations?
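For illustration, roughly what that early return would look like in context (toBytes and the string handling are made up for the example, not the project's actual code):
```typescript
// Hypothetical sketch of the suggested fix: if the caller already passes raw
// bytes, hand them straight back instead of forcing an extra conversion step.
// toBytes() is an illustrative name, not the actual function from the project.
function toBytes(val: Uint8Array | string): Uint8Array {
  if (val instanceof Uint8Array) return val; // the one-liner quoted above
  return new TextEncoder().encode(val);      // otherwise encode strings as UTF-8
}

console.log(toBytes(new Uint8Array([1, 2, 3]))); // passed through untouched
console.log(toBytes("hello"));                   // Uint8Array of UTF-8 bytes
```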
-
My designer decided the "submit" button on a form should be touching the last input field (basically making it look like the button is coming out from under the input field). Sure, whatever.
But what about form validation? There's no way to show the error message under the input field without messing with that submit button. Their response? Just don't show the validation error.
Because god forbid a UX designer would consider multiple potential states and scenarios when designing the UX to prevent problems like that. -
Just learned that the live endpoint only accepts requests from whitelisted IPs. Guess how many times they mention that in the documentation? Zero.
Well, just ask them to whitelist the static IPs, right? Wrong! They allow only one IP per customer, and I need at least 3 because the servers are in multiple availability zones.
Oh, and did I mention the API uses HTTPS with client certificates? So proxying the requests through some static IP probably isn't going to work without some highly unsafe hacks...
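For context, a mutual-TLS call from Node looks roughly like this (host, path and file names are placeholders, not the real API); a proxy that terminates TLS on the static IP would need a copy of that private key, which is presumably the unsafe part:
```typescript
import { request } from "node:https";
import { readFileSync } from "node:fs";

// Rough mTLS sketch; the host, path and file names are placeholders.
const req = request(
  {
    host: "api.example.com",
    path: "/live/endpoint",
    method: "POST",
    // Client certificate and private key presented during the TLS handshake.
    cert: readFileSync("./client.crt"),
    key: readFileSync("./client.key"),
  },
  (res) => console.log(res.statusCode)
);
req.end();
```
-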
"The colour is different due to colour changes" - yeah no shit, everyone can see that. Doesn't mean that much though when the colour depends on a number of factors beyond anyone's control.
-
@thebiochemic Multiple research papers have shown that the relationship between the programming language and the quality of the software is minimal, if any (see FSE, CACM). While certain languages are somewhat faster when comparing individual algorithms, most of the "hard work" is done by the underlying low-level libraries, and the remaining few percent have barely any impact on the overall performance of the actual software.
It's obvious you prefer software in Rust, and you're just trying to justify the whole cult around it, thus proving the whole point about Rust being overhyped. -
People aren't taking shit on the software you're using, people are taking shit on you because you have no fucking clue how any of the software you're using works and what it does, and you never take the fucking time to read the manual and follow a few tutorials first.
-
@thebiochemic So you need to know the language before you even start using the software in case it turns out to be slow / buggy? Or do you claim that the correlation between the language and the quality of the software is somehow significant and more important than everything else that goes into developing the software?
-
@thebiochemic Does your bread say "made with XYZ industrial dough mixer"? If some brand of bread printed that info all over the packaging, would you buy it?
-
If the first thing you need to tell me about the software is what language it's written in, there are only two options: either it is a library for that language, or it is overhyped bullshit.
-
First of all, you're not only writing code, you're also reading it. In fact, you only write the code once, but you (and probably other people) will read it many times. And reading 30 characters just to know whether the function prints to a file stream or to some other type of output quickly becomes a major pain in the ass. The longer it takes to read the code, the harder it is to pay attention to what the code does.
Second, the more characters there are in a function name, the more likely it is to get matched for autocompletion, ultimately making autocompletion less useful.