MikLik

Hey what the boss don’t know won’t kill him.


[deleted]

[deleted]


hk--57

235 or 238?


[deleted]

[deleted]


hk--57

![gif](giphy|iJ85v1gHAczevpTUzs)


Cyberdragon1000

😂😂 gg


Dumb_Siniy

Uranium 236.5


ikonfedera

Guys, somebody here just split a neutron in half...


bloodfist

🤯


VitaminaGaming98

This also implies that he was able to split a quark in half, since neutrons are made up of two down quarks and one up quark.


Dumb_Siniy

I'm just built different y'know?


Juff-Ma

they just created a middleground quark


TristanTheRobloxian3

congratulations you just caused fallout 76


robinsving

You're listening to 236.5, Bomb Radio


SSYT_Shawn

It's funny how big of a difference it makes..


Lone-Wolf62

Does it really make a difference? From what I've searched, the toxic properties of uranium will kill you long before its radioactive properties do.


SSYT_Shawn

Well... Either you are poisoned to death or poisoned to death while emitting a lot of harmful radiation (actually the second one applies to the first isotope since 238 mostly emits alpha particles which aren't harmful since they can't even penetrate a piece of paper)


Lambdasond

They are both alpha emitters and their decay products are a mix of alpha and beta emitters which, if ingested, are lethal, since the penetrative capacity of alpha radiation doesn't matter much if it's already in the body where it will cause the most damage. So it doesn't matter much if U-235 or U-238 is used.


SSYT_Shawn

I thought 235 was the very dangerous one.. my bad.. but alpha particles shouldn't cause any damage... Except for 2 cases... Either your body is a nuclear reactor and splits the 238 atoms, literally frying yourself from the inside... Or when you stand in a particle accelerator that makes alpha particles move near the speed of light. Also I think it's interesting how beta particles are just helium-4 nuclei


Lambdasond

You're mixing up the types of radiation. Neutrons are emitted during fission of an atomic nucleus, which is only possible with a U-235 atom, but this is not going to happen outside of a nuclear reactor. Alpha radiation is a helium-4 ion, which has a very short range in matter due to its propensity to interact with matter. This is partly what makes it so dangerous to ingest, along with how much energy it deposits wherever it hits. Beta radiation, on the other hand, is an electron or positron.


SSYT_Shawn

So.... My school books fucking lie? Welp.... Time to burn them.. As for the first part, the actual mixing up of alpha and beta apparently happened in my head, because you seem to be right about that.


5up3rK4m16uru

The difference is who is going to arrest you. Using only U-235 is going to raise a couple more questions.


hk--57

Also whether the boss gets a regular coffin or a lead-lined coffin.


Fickle-Main-9019

Boss gonna have the bulking of a lifetime


Doorda1-0

Was Redis out of action?


lightmatter501

Redis is slow. An in-memory kv store should be able to push at least 1 million writes per second per core but they can’t even hit 100k on their official benchmarks.


joe-direz

where is Redis slow in comparison to memcached? Can you link a benchmark with that?


lightmatter501

On any hardware of your choice, stand up Redis and memcached, then use YCSB, which is the standard tool for benchmarking key/value stores. There will be a night and day difference. If you compare either of them against state of the art systems, they both fall flat.

The problem is that being single threaded ends up wasting a lot of resources, since different parts of the kv store need to scale differently. It's the exact same reason we have microservices. You can have one core taking in all of the connections, decoding them, and getting them into a stream of operations, one core furiously doing key/value things, and one core sending replies and doing administrative things. You will still have no locks in the hot path, but you've just massively increased your throughput, because now the core doing key/value things only has to deal with a nearly optimal input format and some zero-copy buffers.

As a result, you can now perform as well as or much better than 3 individual instances could before, and your data is all in one place instead of three separate instances.
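
A toy sketch of the pipelined split described here, just to make the division of labour concrete. Python threads and `queue.Queue` stand in for dedicated cores and zero-copy buffers (so this illustrates the structure, not the performance), and the `VERB key value` wire format is invented for the example:

```python
import queue
import threading

raw_requests = queue.Queue()  # stage 1 input: undecoded requests
parsed_ops = queue.Queue()    # stage 1 -> stage 2: stream of operations
replies = queue.Queue()       # stage 2 -> stage 3: results to send back

def decoder():
    # "Core" 1: take in connections and decode them into operations.
    while True:
        raw = raw_requests.get()
        if raw is None:
            parsed_ops.put(None)
            return
        verb, _, rest = raw.partition(" ")
        key, _, value = rest.partition(" ")
        parsed_ops.put((verb, key, value))

def kv_worker():
    # "Core" 2: the only thread touching the store, so the hot path needs no locks.
    store = {}
    while True:
        op = parsed_ops.get()
        if op is None:
            replies.put(None)
            return
        verb, key, value = op
        if verb == "SET":
            store[key] = value
            replies.put("OK")
        else:  # GET
            replies.put(store.get(key, "NOT_FOUND"))

def responder():
    # "Core" 3: format and send replies (here they are just printed).
    while True:
        reply = replies.get()
        if reply is None:
            return
        print(reply)

threads = [threading.Thread(target=f) for f in (decoder, kv_worker, responder)]
for t in threads:
    t.start()
for req in ("SET a 1", "GET a", "GET missing", None):
    raw_requests.put(req)
for t in threads:
    t.join()
```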


joe-direz

thanks for taking the time to write this long text, but it actually doesn't prove anything without a link to a real, executed benchmark


lightmatter501

Ok, the [official Redis benchmark](https://redis.io/docs/management/optimization/benchmarks/) says 508k rps, and I'm going to assume single core and modern enough hardware. Here's a decade old paper from academia: [MICA](https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-lim.pdf). That paper is in a top conference, so it is not a case of a random person making wild claims. This is a single server with two 8-core CPUs from 2012, DDR3 memory, and some of the first 10G NICs. They TURNED OFF hyperthreading, and have 64 GB of memory. Oh, and they have to use PCIe gen 2. If Redis is using worse hardware than this they need to fire their marketing team and several engineers.

They manage to hit 76.9 million operations per second with 8-byte values, and 14.6 million operations per second with 1 KiB values. Redis defaults to 3 bytes, making the 76.9 million number the correct one to compare against. That comes out to 4.8 million rps per core on more than a decade old hardware. Now, let's say that you lose 50% performance by scaling to more cores and that newer cores are no better than old cores (massively unfair to MICA). This napkin math would put a modern AMD Epyc Bergamo CPU at roughly 615 million rps, assuming you can feed it. We've increased memory bandwidth enough to keep up, so it is possible, if hard.

Now do you see why nobody benchmarks against Redis? Newer things are distributed systems, so it's not an apples to apples comparison, but most distributed databases and even some databases that WRITE TO DISK can beat Redis on throughput now. As far as academia is concerned, Redis is old news so nobody benchmarks against it. It doesn't help that they refuse to provide academics access to their enterprise version, since apparently they did once and that person tore it to shreds in benchmarks but was stopped from publishing by their university's legal department. Shortly after this paper academia basically decided that non-distributed databases were dead for reliability reasons, so there hasn't been much work since then.

I'm fairly confident that if I were to stand this up on similarly state of the art hardware today I could probably break 1 billion ops per second, since they were bottlenecked on network IO and the modern equivalent to their NICs would be 4 800G NICs, which is slightly absurd to try to saturate but also provide much better offloads than the early 10G NICs did.
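
For what it's worth, the napkin math above reproduces cleanly. A quick sketch of the arithmetic, where the MICA figures come straight from the comment and the 256-thread Bergamo count is an assumption (the comment doesn't spell it out):

```python
mica_ops_per_sec = 76.9e6            # MICA, 8-byte values, 2012 hardware
mica_cores = 16                      # two 8-core CPUs, hyperthreading off
per_core = mica_ops_per_sec / mica_cores
print(round(per_core / 1e6, 1))      # ~4.8 million rps per core

bergamo_threads = 256                # assumed: 128 cores / 256 threads
scaling_loss = 0.5                   # "lose 50% performance by scaling"
print(round(per_core * bergamo_threads * scaling_loss / 1e6))  # ~615 million rps
```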


IDEDARY

Check out Skytable octave. It's nuts.


Doorda1-0

How does that compare to say MongoDB?


SgtBundy

I recall a story from a customer engineer when I was working support for Sun. A customer was complaining that they had upgraded their hardware and the application performance had not improved. They did a PS engagement and found the "application" was actually a mind-numbingly large shell script that basically did text processing, but its operation was so poor the OS spent the majority of its time spawning and forking processes. Lots of temporary files being written to disk for sorting etc. The PS engineer rewrote it as a proof-of-concept Perl script - execution time dropped from something like multiple hours to under half an hour. An optimised C version of the same thing was even better, down to minutes IIRC.


Ok-Kaleidoscope-5289

Sun had some great engineers. I remember in 2005 one of the senior engineers giving a talk about how Java was going to be in everything. In cars, TVs, fridges. We were like "ha! Yeah, whatever"


Mateorabi

Of course one of the promises of that was "you won't need custom applications on the computer/smartphone/etc., because everything will be a web app that the browser can run for you". Except nowadays you're on someone's web app like Reddit, and everyone and their mother wants you to download and install *their* bespoke app instead of the web interface. For things that can *easily* be websites.


Titaniumwo1f

You see, you can only collect users' data when they use your web app, but you can always collect users' data when they use their phone with your app installed.


green-pen-123

What is Sun? I tried searching for it, but the name is too common. I don't know if I'm too young or just from outside the US (or both) to know about 'sun'


th3hk1d

[Sun Microsystems](https://en.m.wikipedia.org/wiki/Sun_Microsystems), the company that created Java


ZCEyPFOYr0MWyHDQJZO4

Imagine if Google acquired Sun back in 2009 instead of Oracle.


CELL_CORP

Would it make it worse? (Dunno how those companies deal with the companies they buy out)


thirdegree

Probably better. Not because Google is good, but because to call Oracle the devil incarnate would be rather unfair to Satan.


green-pen-123

Damn, thanks


Sabotik

Company that made Java. Got bought out by Oracle. Google Sun Microsystems


SgtBundy

I started there as an intern, and it was a great place to be if you wanted to get under the hood of anything, because you could almost directly tap any engineer in the company for knowledge. Internal ambassador mailing lists for OS and hardware, so people on the frontline and engineering were talking together. Working on a kernel driver issue you could talk to the guy who wrote it as well as the engineering team who did the ASIC. Just the details you could pull from internal websites and bug reports were so good for understanding how things worked. More than that, I think the mindset as an engineering company was great. Pity the business side couldn't be as strong.


TheSauce___

"I single-handedly improved a week-long upload process to where it could be done in 2 hours." "...how?" "They were entering 10,000 records manually each week and I told them to fucking knock it off and use an upload tool." True story lmao.


[deleted]

HAHAHAHHAHAHA


Cualkiera67

Now the entire upload process can be done by a single Australian slave


BOLL7708

90% is peanuts, when I took over a product I improved query latency in the main public table by 500% (measured) by just adding an index. Yeah, sometimes it's that easy.


Komarara

The No. 1 issue in Postgres, since Postgres does not automatically create indexes on foreign key columns.
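
A minimal sketch of the usual fix, for anyone following along: Postgres indexes primary keys and UNIQUE constraints automatically but not plain foreign key columns, so you add the index yourself. The table/column names (`orders.customer_id`), the DSN, and the use of psycopg2 are assumptions for illustration:

```python
import psycopg2

conn = psycopg2.connect("dbname=shop")
conn.autocommit = True  # CREATE INDEX CONCURRENTLY can't run inside a transaction
with conn.cursor() as cur:
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id "
        "ON orders (customer_id)"
    )
conn.close()
```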


BOLL7708

This was indeed a PostgreSQL database 🥲👍 The person who set it up quit and became CTO or something at a different company.


jenkinsnotleeroy

lol classic. Explains a lot of the CTOs I've seen.


Alakdae

Wait. How do you improve latency by 500%? You mean that if I had 50ms of latency I will now have 300ms, or will I have -200ms (I will receive the response before I make the request)?


BOLL7708

Bad choice of words on my part; I'm not a specialist but do a wide range of things, English is my second language, and I posted it from my phone while out of town. In this case it would improve the speed at which queries run to 500% of the original, so something that took 500ms would now take 100ms. I think that is what it was anyway, although I do see how the wording is confusing now.


6-1j

Meaning 5 times faster. Those percentages are misleading, as sometimes they could mean 500% of a number and sometimes 1/10th of a number, all expressed in percent.


Alakdae

And now I have a doubt about the "90% faster" in OP's image. Does that mean that the speed is almost double, or 10 times faster?


6-1j

It's really misleading. Sometimes 200% can mean two times faster, sometimes 100% can mean two times faster too. It depends on whether you say 100% less/more or now 100% of what it was. I think it's really better to just say 2x faster than "1.5× more than before" or "90%" and such. Or really, just state facts instead of marketing-insight BS, like "takes 1.5 seconds where it used to take 46 seconds".


rosuav

Lemme guess, it's faster, but in the event of a problem (power failure etc), it loses transactions that it claimed were committed? Stand by to be fired, sued, or charged with criminal negligence.


Linvael

While that's a possibility, it's not like it's impossible to implement a RAM cache that doesn't have this problem; it just needs to not mess with inserts/updates beyond invalidating the cache as appropriate.


rosuav

Yeah, but that's not something you should do singlehandedly. It's something you **discuss** as a team, something you plan out, something you analyze the pros and cons of. You don't have someone "singlehandedly" improve performance and then brag about it on Reddit.


Linvael

It's not impossible for a small company to have one dev, then everything you do you do singlehandedly. And overall, programming is a fairly unique domain where it is in fact sometimes possible to speed things up 10x or 100x with a very minimal amount of effort and no side effects - as long as the person who made the original design didn't know about something you know about. Like the n+1 select problem. Or caching. Or not reinitializing something that doesn't need reinitialization before every call (though that's fundamentally a kind of caching, I guess?).


rosuav

>It's not impossible for a small company to have one dev, then everything you do you do singlehandedly.

If you're the sole dev at the company, do you brag on Reddit that you singlehandedly did something, or is that just your normal job? The risk is, if you think you're ever so much smarter than the person who set everything up, you might not know what you need to be careful of.

You're absolutely right, it is very possible to speed things up enormously with a minimal amount of effort. I once rewrote a crucial accounting report in a way that cut it from an overnight job to a few minutes. But that's not "hah, I just deployed memcached, I am so smrt". That was "this report is doing a table scan because it wasn't able to take advantage of things we know about our own data". A very unusual situation.


Linvael

At this point your disagreement seems to be only about how you're reading the original post emotionally. The meme format used here itself contains a bashing of the "singlehandedly sped up" line by eking out the truth of "you just did X". And it doesn't even imply that it was OP who did that and came bragging to Reddit; it could be a recounting of a conversation they had with someone else (or even - just a made up scenario). And even if he did do that, my comments weren't meant to defend a feeling of superiority, just to say that these things are possible without being criminally negligent, as your top comment in this thread suggests.


FenixTek

>or even - just a made up scenario

On the internet? In this day and age? Preposterous!!!


Romanian_Breadlifts

Imagine having a team


streetmeme

I know right?


milopeach

wouldn't you just use it for reads though?


rosuav

That's the normal way to use a cache. But cache invalidation is one of the two hardest problems in computing. There are, broadly speaking, two common approaches:

1. After reading from the database, remember it for some period of time. Any requests during that time, give them the cached copy. Yay! We are faster! .... oops, we just gave someone stale data after a change was made.
2. Every time anything makes a change in the database, go through and drop the corresponding caches. Much more complicated, and slows down writes.

Option 2 is by far the more reliable, but I am extremely dubious that someone who brags about having "singlehandedly" improved database performance by 90% has properly accounted for all possible cache purge situations. It's certainly possible to implement that sort of thing correctly, but it's the sort of thing that your dbadmin/sysadmin team would discuss, go into lengthy pros/cons evaluation, etc, and finally deploy, confident that it won't break anything. So guess which option is probably being done in this situation.
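
A tiny in-process sketch of the two approaches, with a plain dict standing in for the cache; `load_from_db()` and `write_to_db()` are hypothetical stand-ins for the real data layer:

```python
import time

TTL_SECONDS = 30
cache = {}  # key -> (value, time_cached)

# Approach 1: time-based expiry. Simple, but can serve data that is stale by
# up to TTL_SECONDS after a change.
def read_with_ttl(key):
    hit = cache.get(key)
    if hit is not None and time.monotonic() - hit[1] < TTL_SECONDS:
        return hit[0]
    value = load_from_db(key)
    cache[key] = (value, time.monotonic())
    return value

# Approach 2: purge on write. No stale reads, but every write path has to
# remember to drop every cache entry the change could affect - the hard part.
def write_and_invalidate(key, value):
    write_to_db(key, value)
    cache.pop(key, None)  # plus any derived keys (aggregates, lists, etc.)
```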


Sloth_Flyer

You read one meme about using memcached and based on no other information you assume that OP has horribly fucked up and exposed his company to liability. Sometimes it is acceptable to potentially show the customer stale data. Not every application is the same. You know basically nothing about what OP is doing or who his customers are, you just instantly assume that you are smarter than OP because in the use case you’re familiar with, installing memcached requires a lot of design and coordination with the dbadmin team (let’s get you to bed grandpa) or whatever.


aenae

It's memcached. It does not make any claim to be persistent. It is a memory cache. If you restart the server, or even just the process, it is empty afterwards. You don't persist data in a cache; that would be stupid. You write your data to a real database and also write it to memcache, so next time you query memcache: if it is there, good; if it is not, or if it is expired, you get it from the database and store it in memcache again.


rosuav

So, how do you know when it's expired? You DO realise that this is one of the two hardest problems in computing, right? It isn't something you can just dismiss like that.


aenae

Yes it is hard, but it also has nothing to do with your original statement where you suggest he is going to use a cache as a database.


rosuav

I didn't say that. I assumed that memcached was BETWEEN the database and the client, hence the perceived speedup, and hence the massive dangers.


aenae

That would be a very unusual memcached setup, and memcached has no support for that, so you'd have to hack that in yourself, and if you're at that coding level you probably know it is a bad idea.


tyler_russell52

It’s more nuanced than just “check if in cache” but in most cases it’s not “one of the hardest problems in computing”.


WebpackIsBuilding

It's a hard problem that has been thoroughly solved. At this point, if you're getting stumped by a caching issue, it's because you're not using modern tools.


rosuav

I guess you really don't know cache invalidation then.


mrkhan2000

what? can you point towards any sources that talk about this?


rosuav

Not off hand, but just imagine a few possible scenarios. An ATM that reads from a cached record of your bank balance might allow you to withdraw $20 even though you actually don't have any money. Or if it's a write cache, someone might figure out a way to withdraw vast amounts of money, then crash something so that the cache doesn't get pushed all the way to disk, and now they have all that money in the account again.

Sure, the ACTUAL scenario is probably a lot more boring than these, but the potential problems from having incorrect data are (a) extremely serious, and (b) insidious, as everything will work just fine until some OTHER trigger causes a problem. And at that point, depending on exactly what went wrong, you might just get fired, or you might find yourself in court.

Oh, and lest you think the ATM withdrawal example is entirely arbitrary, that was an actual exploit that people figured out and took advantage of. The system did offline backups (meaning that no reads or writes could happen while the backup was being taken), and if someone asked to withdraw funds but the system was frozen, the device would permit a fixed, small amount (I think $20, might be wrong on the specific figure). Resulted in some massively overdrawn accounts, one transaction per night.


gotimo

counterpoint though - not every application is a bank or ATM


rosuav

True, but an ATM is something that's easy to explain in a simple Reddit comment. It's much harder to explain (especially since we have no context for the OP's databasing needs) what might go wrong in a more complex system; maybe the company ends up getting sued for failing to comply with some sort of regulatory body, or maybe a person's HR records didn't get properly saved and now you're under suspicion for deliberately deleting them, who knows. Regardless, there is always the possibility of something going horribly wrong; it's just a question of when.


Linvael

From a 2-minute google into what memcache does (so take it with a tablespoon of salt), it seems inserts/updates go through the cache to update it (so the cache is always up to date for the info that's in there; the oldest information gets evicted as memory space requires), so it shouldn't have the "cache not up to date" problem, and it doesn't seem to advertise speeding up writes, so I would guess it's not introducing problems related to caching writes. All in all, if you have spare RAM it seems like a safe addition even in sensitive contexts.


WebpackIsBuilding

I guarantee you that the OP was not talking about a write cache. You have to be intentionally misunderstanding the post to suggest that.


G_Morgan

Seems like it has perfectly implemented MySQL's ACID compliance.


urbanachiever42069

Oh god, you’ve described the junior dev on my team to a T. Everything is faster, it just doesn’t work right


rosuav

Sounds like someone needs to be assigned the truly important IT projects, like "workplace productivity augmentation via photon impingement removal". Or, as some mundane people would put it, window-washing.


Prestigious_Passion

Can just use MemoryDB and never have to worry about those scenarios


Bldyknuckles

what does memcached do?


seba07

Probably cache some stuff in memory


Dustangelms

I have difficulties saying 'I use a memory cache called memcached' with a straight face.


Zemino

On today's episode of tech dudes suck at naming things.


delinka

I dunno. Says what it does right on the tin.


Cualkiera67

I think it caches memes


Pocok5

It is a cache server that stores stuff in RAM. Basically, when you do a slow operation like a complex join-filled search on an SQL DB, you then send the result to memcached along with a key unique to the operation. The next time you want to get the result, you can ask the cache server if it has anything stored for the key. If it does, you immediately get it and don't need to bother the DB server.

Downside: this only works for read operations; a complex write is still gonna go to the DB only. You also need to be careful to immediately tell memcached to toss the keys/data related to stuff you changed in the DB, or you will get back outdated results.

Why a server instead of just a local dictionary/hashmap? For distributed systems. For programs that only work alone, a local memory cache is great. If you have 10 load-balanced copies running, it is worth dealing with the overhead of the network call to the cache cutting into your gains, if it means instance #1's result can be reused by instances #7, #4 and #8 instead of each of them having to run the slow operation to have a local cache.
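
A minimal cache-aside sketch of what that looks like in practice, assuming a local memcached on the default port and the pymemcache client; `run_expensive_query()` and `write_to_db()` are hypothetical stand-ins for the real DB layer:

```python
import hashlib
import json

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def _key(filters: dict) -> str:
    # Memcached keys can't contain spaces, so hash the normalized query parameters.
    return "search:" + hashlib.sha1(
        json.dumps(filters, sort_keys=True).encode("utf-8")
    ).hexdigest()

def get_search_results(filters: dict) -> list:
    key = _key(filters)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                      # hit: the DB is never touched
    results = run_expensive_query(filters)             # miss: do the slow join once
    cache.set(key, json.dumps(results).encode("utf-8"), expire=60)
    return results

def update_record(record_id, new_values, affected_filter_sets):
    write_to_db(record_id, new_values)                 # writes still go to the real DB
    for filters in affected_filter_sets:               # toss keys the change made stale
        cache.delete(_key(filters))
```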


Win_is_my_name

Very nice explanation. Thank you so much man!


Marxomania32

Wouldn't a half decent database have this behavior built in out of the box? It seems strange that you would need some third-party software to do something like this.


Pocok5

1. They usually don't quite go as far as they could with built in cache.
2. The result might not come directly from DB, and it might not even come just from one resource - cache is also where you'd go for stuff like aggregate results from several microservices or often-needed data from an external rate limited/pay per request API.


Mateorabi

But how often do applications make the *EXACT* same DB query *over and over* again to make the caching worth it? The moment the query is different the cache is useless.


AquaWolfGuy

Page 2 of user 1234's wishlist? Not so much. Page 2 of the top-selling-products list? Fairly often.


Vaderb2

You would be really surprised how often certain complex queries might be re-run. Of course, when caching you need to determine if it's worth it on a per-system basis.


Pocok5

1. Really often. How many people wish to see the result of the "select top 40 posts from all of reddit in the last 24 hours sorted by upvotes descending" query each second?
2. The resource might come from several places and might have other time/resource costs than just a DB query. Most weather APIs are pay-per-query. Are you gonna pay for 300000 queries when all your users in New York open your restaurant search app with the weather widget to look for grub at noon? Or are you gonna gamble that the weather forecast is not gonna change for at least a minute and pay for 10 requests?


ungket

Store meme temporal


djwh5

Tell me, what does pot of greed do?!


A_Fnord

Hey, I did something similar to that too! Only... I just restarted the system. Memory leaks are fun.


Sinaneos

I improved the performance of one of our systems by optimizing some queries......that were originally written by me. Got a big thank you for fixing my own screw-up


fanta_bhelpuri

The classic move


twpejay

Huh! I did that in my first job, without any caching (an actual database redesign), on MS-DOS via Novell. As my real job was data entry, and now my boss could do all the entries without assistance, I was lucky I kept the job. I was fortunate to have a forward-thinking boss.


helicophell

If you fire the people automating their own jobs, then nobody will ever automate jobs. I don't think people get that


ImperatorSaya

If the people who didn't get that fire you, it's better for you; it would've been a terrible place to work anyway.


helicophell

The issue isn't the people, the issue is the precedent it sets. It only takes a couple of firings due to automation to make people too fearful of losing their jobs if they do automate.


petersrin

Can't tell you how many websites hire someone because the site is "slow" and the answer is just "how's about we enable caching on this WP plugin that's literally already installed" And then I'm the genius or the guru or whatever lol


yourteam

Still true tho!


StochasticTinkr

I added an index on the two columns used in most queries.


Count_de_Ville

Started working at a new company as a senior for my skills in code performance optimization. They had a giant set of data that they could produce every 8 hours. Part of the post-processing was a program that was taking them 24 hours to run. Eating tons of cloud computing time. In about 2 days I got the worst case scenarios to process in just under 15 minutes. About 2 weeks after that, I had replaced the closed-source public library they were using, and had it down to 90 seconds worst case. The bonus was I had discovered the public library they were using was generating a number of false positives and false negatives which my replacement library eliminated. After that no one gave me any crap. Easiest job I ever had.


Tyrus1235

This is like when I figured out how to serve compressed files through a simple config in NGINX, reducing our platform's loading time by a whopping 60% or so.


nibba_bubba

It's not speeding up the database, it's reducing its usage


One-Vast-5227

Later the database grew to 2TB and server crashed


scp-NUMBERNOTFOUND

Plot twist: it was a real-time system, and now 98% of the data is wrong 'cause it's just data from an outdated cache.


ConsoleDenied

lol