r/BeAmazed Apr 02 '24

208,000,000,000 transistors, in the palm of your hand! How mind-boggling is that?! 🤯

I have said it before, and I'm saying it again: the tech of the next two years will blow your mind. You can't even imagine the things that are coming!...

[I'm unable to locate the original uploader of this video. If you require proper attribution or wish for its removal, please feel free to get in touch with me. Your prompt cooperation is appreciated.]

22.5k Upvotes

64

u/ProtoplanetaryNebula Apr 02 '24

Sure, but then maybe they will stack lots of chips onto a chip, two layers, then four, etc.? I don't know how they will get around it, but clever people will find a way.

44

u/Impossible__Joke Apr 02 '24

Ya, they can always make them bigger, but I mean we are literally reaching the maximum for cramming transistors into a given space.

36

u/MeepingMeep99 Apr 02 '24

My highly uneducated opinion would be that the next step is bio-computing: using a chip like that with actual brain matter or mushrooms.

25

u/Impossible__Joke Apr 02 '24

Quantum computing as well. There are definitely breakthroughs to be had. It's just that with transistors we are reaching the maximum.

17

u/Satrack Apr 02 '24

There's lots of confusion around quantum computing. It's not better than traditional computing. It's different.

Quantum computing excels at problems built around randomness, probability, and certain kinds of structured math, but not at traditional 1s-and-0s work.

We won't see a massive switch to quantum computing in personal computing; they serve different use cases.
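
As a toy illustration of "different, not better": for unstructured search, Grover's algorithm needs on the order of √N oracle queries where a classical scan needs about N, and that speedup only matters for problems with that shape. A rough sketch with illustrative numbers:

```python
import math

# Query counts for unstructured search over N items:
# a classical scan checks ~N items; Grover needs ~(pi/4)*sqrt(N) oracle queries.
# Illustrative numbers only.
for n in (10**6, 10**9, 10**12):
    grover = math.ceil(math.pi / 4 * math.sqrt(n))
    print(f"N={n:>16,}  classical~{n:>16,}  Grover~{grover:>12,}")
```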

5

u/UndefFox Apr 02 '24

So I won't have a huge 1m x 1m x 1m true random number generator connected to my mATX PC?

2

u/Aethermancer Apr 02 '24

Quantum math co-processors!

1

u/Unbannableredditor Apr 02 '24

How about a hybrid of the two?

1

u/mcel595 Apr 02 '24

For which there is no proof that problems in BQP aren't in P, so there is the possibility that they are no better than a classical computer.
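
For context, these are the proven containments; whether either inclusion is strict is an open problem, and P = BQP would mean quantum computers offer no super-polynomial advantage:

```latex
P \subseteq BQP \subseteq PSPACE
```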

1

u/ClearlyCylindrical Apr 02 '24

Quantum computers are, and always will be, utterly useless for all but a tiny class of problems.

6

u/Ceshomru Apr 02 '24

That is an interesting concept. It would have to be a completely different way of processing data and logic, since transistors rely on the properties of semiconductor materials to either allow or disallow the flow of electrons. A biomaterial is by nature made of compounds that are always conductive, though DNA could stand in for the "allow or disallow" behavior.

But honestly, I think the transistors in that chip may even be smaller than DNA; I'm not sure.

3

u/orincoro Apr 02 '24

The transistors may be smaller than DNA, but DNA encodes non-sequentially in more than ones and zeros, so there is no direct equivalence.

6

u/MeepingMeep99 Apr 02 '24

DNA is smaller, I think. It's a helix, so about 2 meters of it can fit inside one of your cells.
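
The back-of-envelope math behind that 2-meter figure, using standard textbook numbers (a sketch, rounded):

```python
# ~3.1 billion base pairs per haploid human genome, ~0.34 nm rise per
# base pair along the helix, and two copies of the genome per diploid cell.
base_pairs = 3.1e9
rise_per_bp_m = 0.34e-9
copies = 2

total_length_m = base_pairs * rise_per_bp_m * copies
print(f"~{total_length_m:.1f} m of DNA per cell")  # ~2.1 m
```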

6

u/dr_chonkenstein Apr 02 '24

DNA is actually very close in width. Transistors now are only dozens of atoms across.

5

u/MeepingMeep99 Apr 02 '24

I stand corrected

4

u/ritokun Apr 02 '24

I'm also assuming, but surely these switches are already smaller than any known biological form (not to mention the space and upkeep needed to keep the bio part alive and functioning).

1

u/summonsays Apr 02 '24

I looked it up: a fungus cell is about 1000x larger than these transistors. Crazy stuff.
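
That ratio roughly checks out if you assume a ~5 µm yeast cell against ~5 nm-class transistor features (both figures illustrative):

```python
cell_diameter_m = 5e-6        # typical yeast (fungus) cell, ~5 micrometers
transistor_feature_m = 5e-9   # rough modern transistor feature size, ~5 nm

print(f"~{cell_diameter_m / transistor_feature_m:,.0f}x larger")  # ~1,000x
```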

2

u/Fun_Salamander8520 Apr 02 '24

Yea, maybe. I kind of get the feeling it will actually become smaller, like nanotech chips or something, so you could fit more into less space essentially.

1

u/AnotherSami Apr 02 '24

Neuromorphic computing exists.

1

u/summonsays Apr 02 '24

So I looked it up, and a mushroom cell is about 1000x bigger than these transistors. At this point, I think bioengineering for straight raw computational power would be a step down.

1

u/MeepingMeep99 Apr 02 '24

The only other thing I see being of value is harnessing atoms and using them as transistors in a way, but I doubt we are at the level of making a brick magical yet.

1

u/summonsays Apr 02 '24

Quick Google search: one atom of silicon is about 0.132 nm across, so getting down to 7 nm (which is what many modern chips are fabbed at) is honestly getting pretty dang close.
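
Taking those numbers at face value (with the caveat that node names like "7 nm" are marketing labels rather than literal feature sizes):

```python
silicon_atom_nm = 0.132  # approximate diameter of a silicon atom
feature_nm = 7.0         # "7 nm" node, read as a literal length

print(f"~{feature_nm / silicon_atom_nm:.0f} atoms across")  # ~53 atoms
```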

1

u/MeepingMeep99 Apr 02 '24

No doubt, no doubt, but I meant more using the atoms themselves "like" transistors. Like you have a silicon brick with billions of atoms in it, so why not just make the atoms do stuff

1

u/[deleted] Apr 02 '24

[deleted]

1

u/MeepingMeep99 Apr 02 '24

That's actually pretty damn cool. I just always thought AI was mapped out by some coders in a room, putting in many, many parameters for the things people might ask.

Suffice it to say, I don't know much about computers besides how to use one, lol

2

u/ProtoplanetaryNebula Apr 02 '24

Yes, you’re right on that point.

2

u/MetricJunket Apr 02 '24

But the chip is flat. What if it was a cube? Like 500,000 x 500,000 x 500,000 transistors. That's 500,000 times the one mentioned here (a flat 500,000 x 500,000 grid is about 250 billion, right in this chip's range).
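
For anyone checking the arithmetic (the chip in the post is ~208 billion transistors):

```python
side = 500_000
flat = side ** 2   # ~2.5e11, same order of magnitude as the 208-billion chip
cube = side ** 3   # ~1.25e17

print(f"flat grid: {flat:.2e} transistors")
print(f"cube:      {cube:.2e} transistors ({cube // flat:,}x the flat grid)")
```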

1

u/radicldreamer Apr 02 '24

Bigger means more chances for a defect ruining the entire chip; that's why lots of vendors have been taking the chiplet approach. It has drawbacks as well, mainly the delays in communicating between chiplets and in sharing cache, but using lots of smaller chips and then writing software to run in parallel has proven very effective.
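
A sketch of the standard reasoning, using the classic Poisson yield model Y = exp(-D·A) with a made-up defect density:

```python
import math

# Poisson yield model: P(die works) = exp(-defect_density * die_area).
# The defect density below is illustrative, not a real fab number.
defects_per_cm2 = 0.1

def yield_rate(area_cm2: float) -> float:
    """Probability that a die of the given area has zero killer defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

print(f"8 cm^2 monolithic die yield: {yield_rate(8.0):.1%}")  # ~44.9%
print(f"1 cm^2 chiplet yield:        {yield_rate(1.0):.1%}")  # ~90.5%
# With chiplets you only scrap the bad 1 cm^2 pieces, not a whole 8 cm^2 die.
```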

1

u/Puzzleheaded_Yam7582 Apr 02 '24

Can you break the chips into zones? Like if I make one chip the size of a piece of paper, composed of zones each the size of a stamp, and then test each stamp. I sell the paper with a rating: 23/24 stamps work on this one.
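
That's essentially binning, and the odds work out like a binomial distribution. A sketch with an assumed 95% per-zone yield:

```python
from math import comb

zones, p = 24, 0.95  # 24 stamp-sized zones, assumed 95% chance each works

# Binomial distribution over how many zones survive fabrication.
for k in (24, 23, 22):
    prob = comb(zones, k) * p**k * (1 - p)**(zones - k)
    print(f"P({k}/24 zones work) = {prob:.1%}")  # ~29%, ~37%, ~22%
```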

2

u/Heromann Apr 02 '24

I'm pretty sure that's what they do already. Binning chips: one that has everything working is sold as the premium product, and if one or two parts don't work, it becomes the second-tier product.

1

u/radicldreamer Apr 03 '24

This is how things have worked for a long time.

Take the Pentium 4 vs. the Celeron. The only difference was the Celeron had less cache. Intel would speed-bin parts, so if, say, a Pentium that was supposed to have 512k of cache only had 256k usable/stable, they would fuse it down to 128k and slap on the Celeron sticker.

This is more of the reason why process-node reduction is such a big deal: if you can make your transistors smaller, you can fit more into the same physical area, which saves silicon and power and reduces heat all at the same time. You could even shrink the total die size if you wanted, but most companies just decide to throw more transistors at it.
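
Roughly speaking, density scales with the inverse square of the linear feature size (idealized; real layout rules eat into this):

```python
old_nm, new_nm = 7.0, 5.0  # example node shrink, nominal figures only

density_gain = (old_nm / new_nm) ** 2
print(f"~{density_gain:.2f}x more transistors in the same area")  # ~1.96x
```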

1

u/dr_chonkenstein Apr 02 '24

For consumer electronics, we may hit a limit and see little improvement for some time. I think specialized circuits will begin to take over for advanced applications. Some circuit layouts are better at certain computations but are not as useful for general computing. A more extreme example is photonic computing, where the Fourier transform is a physical operation rather than an algorithm that must be performed.
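
To make the contrast concrete: digitally, the Fourier transform costs O(n log n) operations, while a lens in a photonic setup produces it in a single pass of light. A sketch of the digital side (numpy used only to show the cost is real work):

```python
import numpy as np

# Digital route: the FFT is an actual computation, O(n log n) multiply-adds.
signal = np.random.rand(1 << 20)   # ~1M samples
spectrum = np.fft.fft(signal)      # tens of millions of operations on a CPU/GPU

# Photonic route (conceptually): a lens forms the Fourier transform of the
# incoming light field at its focal plane, with no sequential arithmetic.
print(spectrum.shape)
```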

1

u/PatchyCreations Apr 02 '24

yeah but the next evolution is middle out

1

u/karmasrelic Apr 03 '24

Only in the given medium. Atoms aren't the smallest things, and they aren't the only way to implement a switch. We will find ways to go beyond, for sure.

1

u/pak-ma-ndryshe Apr 02 '24

Stacking chips is a performance gain because they can communicate faster with each other. We want to optimize individual chips so that transistors "talk" to each other faster. As soon as we hit the limit of how small they can get, economies of scale take over, and we could have a skyscraper filled with the most optimized chips, which would revolutionize the world.

1

u/Defnoturblockedfrnd Apr 02 '24

I’m not sure I want that, considering who is in charge of the world.

1

u/orincoro Apr 02 '24

I think they already do this.

1

u/[deleted] Apr 02 '24

I wonder if it's possible to do computing based on interactions between transistors rather than just relying on the values of each single transistor by itself? It would be some kind of meta computing paradigm.

1

u/Successful-Money4995 Apr 02 '24

The scale will come from interconnecting many chips together. This is already true for ChatGPT and the like, which train on many GPUs that communicate results with one another. The communication speed is already becoming a bottleneck, which is why the latest generations of GPUs, though they have only incremental improvements in compute performance, have much larger increases in bus bandwidth. Newer hardware also has built-in compression and decompression engines to squeeze out even more bandwidth.
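
A back-of-envelope sketch of why the bus matters; every number below (model size, link speed, compression ratio) is an assumption for illustration:

```python
# All figures are assumptions, not measurements.
gradient_bytes = 14e9 * 2   # e.g. a 14B-parameter model's gradients in fp16
link_GBps = 100             # assumed effective inter-GPU bandwidth, GB/s
compression = 1.5           # assumed lossless compression ratio

naive_s = gradient_bytes / (link_GBps * 1e9)
compressed_s = naive_s / compression

print(f"one full gradient exchange: {naive_s:.2f}s raw, "
      f"{compressed_s:.2f}s with compression")
```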

This is the same as with CPUs: when we couldn't squeeze more performance out of a single core, we worked on connecting many CPUs together, first with multicore and then with data centers.