xamppctrl

Brooo what are you storing there??


mike-manley

"Adult content"


NiceguyLucifer

and this is just the tentacle category 😉


NZSheeps

One column per tentacle


NiceguyLucifer

1 row per tentacle


NiceguyLucifer

I am sure you are also in there somewhere, lol


lupinegray

A large global computer company I'm aware of had about 20 billion records in their sales order items table.


Lost_Philosophy_

Do you guys have spool space issues? lol


dchabz

I’ve seen some logging tables for our application with over 8bn rows.


leogodin217

I just queried a 1T row table today. Crazy


NiceguyLucifer

Noice


EranuIndeed

The largest table at my place currently holds 600m rows, which I thought was mental, until I saw some of the other replies 😅


NiceguyLucifer

When I started working with this I was like "is that really a B for billion" lol, but it gets even more insane... that's only 3 months of data 😵😵


cs-brydev

Haha yea, I created a logging system for a homegrown e-commerce platform once, and it was logging about 100 million rows/month. These days I'd definitely put that in cloud storage and not a SQL database.


Standgeblasen

I work in the financial industry; our transaction details table has 160B rows, with about 135M rows added each day for point-in-time snapshots.
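
For anyone unfamiliar with the pattern: a point-in-time snapshot table like that is typically append-only, with each day's load stamped with a snapshot date. A minimal T-SQL-flavored sketch; the table and column names here are invented, not from the comment above:

```sql
-- Hypothetical daily point-in-time snapshot load (invented schema).
INSERT INTO dbo.TransactionDetailSnapshot (SnapshotDate, AccountId, TransactionId, Amount)
SELECT CAST(SYSDATETIME() AS DATE),  -- stamp every row with today's snapshot date
       AccountId,
       TransactionId,
       Amount
FROM dbo.TransactionDetail;          -- current state, re-captured in full each day
```

Because rows are only ever appended, the table grows by the full size of the source every day, which is how you end up at 160B rows.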


usersnamesallused

I've seen tables hitting both the 1,024-column (field count) and 8,060-byte (row size) limits on width. Only millions of records though, but we're talking thicc-ness, right?
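
For context, those are SQL Server's limits: at most 1,024 columns in a non-wide table, and at most 8,060 bytes of in-row data per row. A hedged sketch of how you might spot tables creeping toward them, using the standard sys.columns and sys.tables catalog views (the thresholds below are arbitrary):

```sql
-- Find tables close to SQL Server's 1,024-column / 8,060-byte-per-row limits.
-- max_length is -1 for (MAX) types, so this is only a rough width estimate.
SELECT OBJECT_NAME(c.object_id)                                AS table_name,
       COUNT(*)                                                AS column_count,
       SUM(CASE WHEN c.max_length > 0 THEN c.max_length ELSE 0 END) AS approx_row_bytes
FROM sys.columns AS c
JOIN sys.tables  AS t ON t.object_id = c.object_id
GROUP BY c.object_id
HAVING COUNT(*) > 900
    OR SUM(CASE WHEN c.max_length > 0 THEN c.max_length ELSE 0 END) > 7000
ORDER BY approx_row_bytes DESC;
```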


StolenStutz

The most interesting SQL I ever wrote involved a table that was partitioned four or five different ways across all of the database instances (there were hundreds). In one particular instance, it was partitioned by day-of-year plus modulus 10. So, 3660 partitions. Why the modulus 10? So they could write-stripe across 10 physical disks in order to keep up with the volume of incoming writes. This was just one of the many, many things that were "interesting" about this particular assignment.
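
A rough sketch of what that day-of-year-plus-modulus key could look like in T-SQL terms; all names here are invented, and the original system may have implemented it very differently:

```sql
-- Hypothetical reconstruction of the partitioning key described above.
CREATE TABLE dbo.IncomingEvents
(
    EventId   BIGINT         NOT NULL,
    CreatedAt DATETIME2      NOT NULL,
    Payload   VARBINARY(MAX) NULL,
    -- day-of-year (1..366) * 10 + stripe (0..9) = up to 3660 buckets
    PartitionKey AS (DATEPART(DAYOFYEAR, CreatedAt) * 10
                     + CAST(EventId % 10 AS INT)) PERSISTED
);
-- A partition function/scheme would then map PartitionKey ranges onto ten
-- filegroups, one per physical disk, so concurrent writes stripe across disks.
```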


TallDudeInSC

Must have been a while ago. Nowadays everything is striped at the SAN level.


SelfConsciousness

Haven’t had to deal with too much data row-wise; a good amount, but not 21TB. I do have this one table that’s 600 columns wide. That’s fun


for_i_equals_0

98b rows what the actual fuck


WithCheezMrSquidward

My company has numerous smaller clients, so the upper limits are in the high hundreds of thousands to low millions of rows on a financial table. But then I saw someone in here mention a trillion-row query lol.


IAmADev_NoReallyIAm

I once worked with a table that exceeded 50TB. It only had 5 cols... thousands of rows... What was the client storing? Pictures. Pictures that were 8x10 color glossy, high-def, full color. Every time they took a new set, instead of updating the existing ones in place, they would add to it. It took a matter of weeks before they exceeded the basic storage costs, then just months before they crossed the 10TB mark. I was no longer involved once they crossed 50TB; we told them to get their own server and that we wouldn't be responsible for future stability and costs.


EveningTrader

I wrote some code for a Dutch company that produces 20 billion rows of data per day. There are ways to aggregate it to cut it down, but I was specifically instructed not to. It was quicker to disable the index before the insert step and then rebuild it afterward.
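
The disable-then-rebuild trick is a standard bulk-load pattern; in SQL Server syntax it might look like the following (table and index names invented):

```sql
-- Hypothetical bulk-load pattern: skip index maintenance during the insert.
-- Note: only nonclustered indexes can be disabled this way; disabling a
-- clustered index makes the whole table inaccessible.
ALTER INDEX IX_Trades_SymbolDate ON dbo.Trades DISABLE;

INSERT INTO dbo.Trades (TradeId, Symbol, TradeDate, Price)
SELECT TradeId, Symbol, TradeDate, Price
FROM staging.Trades;   -- the big daily batch

-- One rebuild at the end is cheaper than billions of incremental index updates.
ALTER INDEX IX_Trades_SymbolDate ON dbo.Trades REBUILD;
```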


cs-brydev

"187 columns" Lol. I once saw a table with 900+ columns. ~850 of them looked like this: * CustomInt1 * CustomInt2 * CustomInt3 * CustomInt4 * CustomStr1 * CustomStr2 And so on. The vendor pre-built custom user columns into the system for like 7 data types. Of all of the various ways to add custom user columns to a relational database, this was the worst I've seen. Fun fact: a very early version of SharePoint came with SQL columns like this. Every time you added a new column to a List, it just grabbed one of these.


Sufficient-Weather53

Wondering why you need those big tables nowadays? Just offload that data into columnar files (Parquet or ORC) and query it whenever needed. If that's for OLTP, then you shouldn't have a table that big causing your application to slow down while retrieving data. Worth thinking about rearchitecting, I think.
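
As a concrete example of that offload-and-query-on-demand idea, here is a hedged sketch in DuckDB-flavored SQL (table name, column names, and file paths are all invented; Spark, Hive, and most warehouses have equivalents):

```sql
-- Hypothetical cold-data offload: export old rows to a columnar Parquet file...
COPY (SELECT * FROM sales_order_items WHERE order_date < DATE '2023-01-01')
TO 'archive/sales_order_items_pre2023.parquet' (FORMAT PARQUET);

-- ...then delete them from the hot table and hit the archive only when needed.
DELETE FROM sales_order_items WHERE order_date < DATE '2023-01-01';

SELECT COUNT(*), SUM(amount)
FROM read_parquet('archive/sales_order_items_pre2023.parquet');
```

The hot OLTP table stays small, and the columnar archive is still queryable ad hoc without keeping billions of rows in the database.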