• 0 Posts
  • 1.28K Comments
Joined 1 year ago
Cake day: June 20th, 2023
  • Aceticon@lemmy.world to Funny@sh.itjust.works · Clever guy · 12 points · edited 2 hours ago

    First, a fair warning: I learned this stuff 3 decades ago and I’ve been working as a programmer since then. I do believe the example I’ll provide still applies up to a point, though CPUs often implement strategies to make this less of a problem.

    =====

    CPUs are internally like an assembly line or a processing pipeline, where the processing of an assembly instruction is broken down into a number of steps. A rough example (representative, though not exact for any specific CPU architecture) would be:

    • Step 1: fetch the assembly instruction from memory.
    • Step 2: fetch into the CPU the data from memory that the instruction requires (if applicable).
    • Step 3: execute the arithmetic or binary operation (if applicable).
    • Step 4: evaluate conditions (if applicable).
    • Step 5: write results to memory (if applicable).

    Now, if the CPU waited for all the steps to be over for one assembly opcode before starting to process the next, that would be quite a waste, since for most of the time the functionality in there would be available for use but sitting idle (in my example, the Arithmetic Logic Unit, which is what’s used in step #3, would not be used during the time when the other steps were being done).

    So what they did was get CPUs to process multiple opcodes in parallel: in my example pipeline you would have one opcode at stage #1, another that already did stage #1 and is at stage #2, and so on. Hence why I also called it an assembly line: at each step a “worker” does some work on the “product” and then passes it to the next “worker”, which does something else to it, and they’re all working at the same time, each doing their bit for a different assembly instruction.
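
    To make the assembly-line picture concrete, here’s a minimal toy simulation in C (my own illustration, using the five made-up stage names from the list above, not modelling any real CPU) that prints which instruction occupies which stage on each clock cycle:

    ```c
    #include <stdio.h>

    #define STAGES 5
    #define INSTRUCTIONS 7

    /* Toy pipeline: on clock cycle c, instruction i sits in stage (c - i),
     * so a new instruction enters the pipeline every cycle and each one
     * takes STAGES cycles to fully complete. */
    int main(void) {
        const char *stage_name[STAGES] = {
            "fetch", "load", "execute", "evaluate", "write"
        };
        for (int cycle = 0; cycle < INSTRUCTIONS + STAGES - 1; cycle++) {
            printf("cycle %2d:", cycle);
            for (int i = 0; i < INSTRUCTIONS; i++) {
                int stage = cycle - i;
                if (stage >= 0 && stage < STAGES)
                    printf("  insn%d:%s", i, stage_name[stage]);
            }
            printf("\n");
        }
        return 0;
    }
    ```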

    The problem with that technique is: what happens if you have an opcode which is a conditional jump (i.e. start processing from another point in memory if a condition holds, which is necessary to implement things like a “for” or “while” loop, or jumping over a block of code if an “if” condition fails)?

    Remember, in my example pipeline the point at which the CPU finally figures out whether it should jump is almost at the end of the pipeline (step #4), so everything before that in the pipeline might be the wrong assembly instructions being processed: say the CPU assumed “no jump” and kept picking up assembly instructions from the memory positions after that conditional-jump instruction, but it turns out it does have to jump, so it was supposed to be processing instructions from somewhere else in memory.

    The original, naive way to handle this problem was to not process any assembly instructions after a conditional jump opcode had been loaded in step #1, and instead take the conditional jump through each step of the pipeline until the CPU had figured out whether the jump should occur, by which point it would start loading opcodes from the correct memory position. This of course meant that every time a conditional jump appeared, the CPU would get a lot slower whilst processing it.

    Later, the solution was speculative processing: the CPU tried to guess whether the condition would be true (i.e. jump) or false (no jump) and would load and start processing the instructions from the memory position matching that assumption. If the guess turned out to be wrong, all the contents of the pipeline behind that conditional jump instruction would be thrown out. This is part of the reason why the pipeline is organised in such a way that the results of the work only ever get written to memory at the last step: if it turns out it was working on the wrong instructions, it just doesn’t do the last step for those wrong instructions. This is on average twice as fast as the naive solution (and better guessing makes it faster still), but it still slowed down the CPU every time a conditional jump appeared.
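
    A way to actually feel the cost of wrong guesses from plain C is the classic sorted-vs-unsorted micro-benchmark below (illustrative only: the names are mine, results vary per CPU, and at high optimization levels the compiler may replace the branch with a conditional move, so compile with something like -O1):

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 20)
    #define REPEAT 50

    static int cmp_byte(const void *a, const void *b) {
        return *(const unsigned char *)a - *(const unsigned char *)b;
    }

    /* Sums all elements >= 128; the "if" is the conditional jump
     * whose outcome the branch predictor has to guess. */
    static long sum_big(const unsigned char *data) {
        long sum = 0;
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)
                sum += data[i];
        return sum;
    }

    int main(void) {
        static unsigned char data[N];
        for (int i = 0; i < N; i++)
            data[i] = (unsigned char)(rand() & 0xFF);

        clock_t t0 = clock();
        long s1 = 0;
        for (int r = 0; r < REPEAT; r++)
            s1 += sum_big(data);        /* random data: ~50% of guesses fail */
        clock_t t1 = clock();

        qsort(data, N, 1, cmp_byte);    /* sorting makes the branch predictable */

        clock_t t2 = clock();
        long s2 = 0;
        for (int r = 0; r < REPEAT; r++)
            s2 += sum_big(data);        /* same work, far fewer mispredictions */
        clock_t t3 = clock();

        printf("unsorted: sum=%ld time=%.3fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("sorted:   sum=%ld time=%.3fs\n", s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
        return 0;
    }
    ```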

    Even later, the solution was to process both branches (i.e. “jump” and “no jump”) in parallel and then, once the condition had been evaluated, throw out the processing for the wrong branch and keep doing the other one. This solved the speed problem, but at the cost of having two of everything, plus it had some other implications for things such as memory caching (which I’m not going to go into here, as that’s a whole other Rabbit Hole).

    Whilst I believe modern CPUs of the kind used in PCs don’t have this problem (and probably also at least ARM7 and above), I’ve actually been doing some Shader programming of late (both Compute and Graphics Shaders), and if I interpreted what I read correctly, a version of this kind of problem still affected GPUs not that long ago (probably because GPUs work by having massive numbers of processing units working in parallel, so by necessity each one is simple), though I believe nowadays it’s not as inadvisable to use “if” when programming shaders as it was a few years ago.
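
    As an aside, the classic workaround on the GPU side was to write “branchless” code: compute both sides and blend between them. Here’s the idea sketched in C for illustration (real shader code would use something like mix()/step(), but the principle is the same):

    ```c
    #include <stdio.h>

    /* Branchy version: contains a conditional jump. */
    static float select_branchy(int cond, float a, float b) {
        if (cond)
            return a;
        return b;
    }

    /* Branchless version: computes both values and blends them,
     * so there is no jump for the hardware to predict. */
    static float select_branchless(int cond, float a, float b) {
        float t = (float)(cond != 0); /* 1.0f or 0.0f */
        return t * a + (1.0f - t) * b;
    }

    int main(void) {
        printf("%.1f %.1f\n",
               select_branchy(1, 3.0f, 7.0f),      /* prints 3.0 */
               select_branchless(0, 3.0f, 7.0f));  /* prints 7.0 */
        return 0;
    }
    ```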

    Anyways, from a programming point of view, this is the reason why C compilers have an optimization option called “loop unrolling”: if you have a “for” loop with a fixed number of iterations known at compile time - for example for(int i = 0; i < 5; i++){ /* do stuff */ } - then instead of generating a single block of assembly with the contents of the “for” loop and a conditional jump at the end, the compiler will “unroll the loop” by generating the assembly for the body of the loop as many times as the loop would run. In my example, the contents of that “for” loop would end up as 5 blocks in assembly, one after the other, the first for i=0, the next for i=1 and so on.
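
    In code, the transformation looks roughly like this (a hand-written illustration with a made-up do_stuff() helper standing in for the loop body; a real compiler, e.g. GCC with -funroll-loops, does the equivalent on the generated assembly, not the C source):

    ```c
    #include <stdio.h>

    /* Made-up stand-in for the loop body ("do stuff"). */
    static void do_stuff(int i) {
        printf("iteration %d\n", i);
    }

    int main(void) {
        /* Original loop: one conditional jump at the end of every iteration. */
        for (int i = 0; i < 5; i++) {
            do_stuff(i);
        }

        /* What the unrolled version conceptually looks like:
         * five copies of the body, no conditional jumps left. */
        do_stuff(0);
        do_stuff(1);
        do_stuff(2);
        do_stuff(3);
        do_stuff(4);
        return 0;
    }
    ```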

    As I said, it’s been a long time since I learned this, and I believe nowadays general CPUs implement strategies to make this a non-problem, but if you’re programming microcontrollers or doing stuff like Compute Shaders to run on GPUs, I believe it’s actually the kind of thing you still have to take into account if you want max performance.

    Edit: Just remembered that even the solution of executing both branches in parallel doesn’t solve everything. For example, what if you have TWO conditional jump instructions one after the other? Theoretically you would need almost 4 of everything to handle parallel execution for that. How about 3 conditional jumps? “Is your nested for-loop screwing your performance? More news at 11!” As I said, this kind of stuff is a bit of a Rabbit Hole.


  • Aceticon@lemmy.world to Funny@sh.itjust.works · Clever guy · 22 points · edited 7 hours ago

    Well, I have an EE Degree specialized in Digital Systems - pretty much the opposite side of Electronic Engineering from the High Power side - and I would be almost as clueless as that guy when it comes to testing a 10,000V fence for power.

    On the other hand I do know a lot of interesting things about CPU design ;)


  • If I’m not mistaken, the buzz is because it’s AC, hence the buzz frequency is the same as the AC’s.

    Certainly it makes sense that the high voltage would be generated from mains power using a big fat transformer since that’s probably the simplest way to do it.



  • Aceticon@lemmy.world to Funny@sh.itjust.works · Clever guy · 2 points · edited 7 hours ago

    When it comes to an engineer doing a dangerous job in a domain other than his or her own, I would say that all the engineer knows is how badly things can get fucked up when one tries to do expert stuff outside one’s own domain, because they’ve been in the position where they were the expert and some non-expert was saying things and trying stuff in their expert domain.

    After seeing others do it in one’s own expert domain, one generally realizes that “maybe, just maybe, that’s exactly how I look to the experts of a domain when I open my big fat mouth outside my own domain of expertise”.


  • Aceticon@lemmy.world to Funny@sh.itjust.works · Clever guy · +5/-1 · edited 7 hours ago

    I’m genuinely curious where you got that from.

    I actually went and checked the minimum air gap to avoid arcing at 10,000V at standard sea level air pressure and it’s actually measured in millimeters.
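
    As a back-of-the-envelope version, using the commonly quoted breakdown field of dry air of roughly 3 kV/mm: 10,000 V ÷ 3,000 V/mm ≈ 3.3 mm, i.e. the conductors would have to come within a few millimeters of each other before the air between them arcs over.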

    Further, is the voltage differential there between parallel conducting lines or is it between the lines and the ground?

    I’m really having trouble seeing how a dry stick would cause arcing between two of those lines, short of bringing them nearer than 4 mm in the first case, much less between one of the lines and the ground in the second case if it’s being held at chest level.

    PS: Mind you, it does make sense with a stick which is not dry - since the water in it makes it conductive - but then the guy himself would be part of the conductive circuit, which kinda defeats the point of using a stick.


  • 33 That night the two girls got their father drunk, and the older daughter went and had sexual relations with him. But Lot did not know when she lay down or when she got up.

    34 The next day the older daughter said to the younger, “Last night I had sexual relations with my father. Let’s get him drunk again tonight so you can go and have sexual relations with him, too. In this way we can use our father to have children to continue our family.”

    35 So that night they got their father drunk again, and the younger daughter went and had sexual relations with him. Again, Lot did not know when she lay down or when she got up.


  • Well, as the guy falling from the top of the Empire State Building was overheard saying on his way down: “well, so far so good”.

    Or as the common caveat given to retail investors goes: past performance is no predictor of future results.

    “So far” proves nothing, because it can be “so far” only because the conditions for something different haven’t yet happened, or because it simply hasn’t been in their best interest yet to act differently.

    If their intentions were really the purest, most honest and genuine of all, they could have placed themselves under a contractual obligation to do so and put money aside for an “end of life plan” in a way such that they can’t legally use it for other things, or even done like GoG and provided offline installers to those people who want them.

    Steam have chosen to maintain their ability to claw back games in your library whilst they could have done otherwise, as demonstrated by GoG, which lets you download offline installers. No matter what they say, their actions to keep that option open say the very opposite.


  • Aceticon@lemmy.world to Science Memes@mander.xyz · Donors · 3 points · edited 8 hours ago

    You didn’t make a mistake, IMHO.

    Nobody made a mistake.

    There was just a mismatch between your unvoiced assumptions and those of other people posting here, so all of you were really just starting from different points and hence going in different directions.

    I suppose many downvoters assumed you were purposefully taking a specifically literal interpretation of “anonymous” in this context for the purpose of defending the University, whilst I myself went with the perfectly valid explanation, until proven otherwise, that you’re just a more literal person than most.

    This is why I went for writing a post which I believed would provide some clarity rather than downvoting your posts.

    As I see it, your points were valid under the interpretation that the University and the article used “anonymous” in the most honest of ways (meaning “unknown to others”), and other posters’ points were valid under the interpretation that the University and the article used “anonymous” in a deceitful way that didn’t match the dictionary definition but instead meant “unknown to the general public”, something for which the correct word is “undisclosed”.


  • To add to your point, it’s amazing that so many people are still mindless fanboys, even of Steam.

    Steam has restrictions on installing the games its customers supposedly own, even if it’s nothing more than “you can’t install it from a local copy of the installer and have to install it from the Steam servers”. It’s not full ownership if you can’t do what you want with it, when you want it, without the say-so of a 3rd party.

    That’s just how it is.

    Now, it’s perfectly fair if one says “yeah, but I totally trust them”, which IMHO is kinda naive in this day and age (personally, almost 4 decades of being a Techie and a gamer have taught me to distrust companies until there’s no way they can avoid their promises, but that’s just me), or if one knows the risks but still thinks it’s worth it to purchase from Steam for many games, and that the mere existence of Steam has allowed many games to exist that wouldn’t have existed otherwise (mainly Indie ones) - which is my own posture, at least up to a point - but a whole different thing is the whole “I LoVe STeaM And tHeY CaN DO NotHInG wrONg” fanboyism.

    Sorry, but they have in place restrictions on game installation, and often game playing, which from the point of view of Customers are not needed and serve no purpose (they’re not optional, not a choice for the customer, but imposed on customers), hence they serve somebody other than the customer. It being a valid business model, and far too common in this day and age (hence people are used to it), doesn’t make those things “in the interest of Customers”, and similarly their being (so far) less enshittified than other similar artificial restrictions on Customers out there doesn’t make them a good thing, only so far not as bad as the others.

    I mean, for fuck’s sake, this isn’t the lobby of an EA multiplayer game and we’re supposed to be mostly adults here on Lemmy: let’s think a bit like frigging adults rather than having knee-jerk pro-Steam reactions based on fucking brand-loyalty like mindless pimply-faced teen fanboys. (Apologies to the handful of wise-beyond-their-years pimply-faced teens that might read this.)


  • It’s even more basic than that: if there’s no escrow with money for that “end of life” “plan” and no contractual way to claw back money for it from those getting dividends from Valve, then what the “Valve representatives” said is a completely empty promise, or in other words a shameless lie.

    Genuine intentions actually have reliable funding attached to them, not just talkie-talkie from people who will never suffer in even the tiniest of ways from not fulfilling what they promised.

    In this day and age, we’ve been swamped with examples showing that we can’t simply trust people to have a genuine feeling of ethical and moral duty to do what they say they will do when there are no actual hard consequences for non-compliance and none of their money is on the line.

    PS: And by “we can’t trust in people” I really mean “we can’t trust in people who are making statements and promises as nameless representatives of a company”. Individuals personally speaking for themselves about something they control are still generally, even in this day and age, much better than people acting the role of anonymous corporate drone.




  • Aceticon@lemmy.world to Science Memes@mander.xyz · Donors · 3 points · 10 hours ago

    It sounds like the university called it an “anonymous donor” for PR reasons whilst it is in fact an “undisclosed donor”.

    Your point only makes sense if the donor was indeed genuinely anonymous (i.e. even the University had no idea who they were) rather than merely described as anonymous by the University when divulging it to the outside world.


  • Aceticon@lemmy.world to Science Memes@mander.xyz · Donors · 2 points · edited 10 hours ago

    I was in Finance when the first part of the outcome of that shit hit in 2008 and the subsequent years (and I say “first part” because we’re still living it, and it looks a lot like there are still 3rd- and further-order consequences of it unfolding for people), and damn, that shit really forced me to realize just how evil and hypocritical neoliberalism and neoliberals really are.

    By the way, I absolutely counted as a “useful idiot” up to then.


  • Aceticon@lemmy.world to Science Memes@mander.xyz · Donors · +3/-1 · 11 hours ago

    It’s the giving of money to a public institution only if they do a certain thing that can be compared to a bribe; the morality of said “thing” is irrelevant.

    I think it boils down to who has the power. If they start a collection for money to expand their building to have more space, and you choose to participate, then it’s not you dictating what they do with the money: all you did was see a cause that you found worthy and contribute to it. The power was entirely in their hands, since they could’ve chosen to collect for a different purpose, and you were just a passive agent. Whilst if you give them money with the proviso that you get to dictate how it gets used, then the power is in your hands, not theirs. The former is more akin to charity and the latter to bribing.

    That said, “bribe” is indeed an imperfect metaphor.



  • I was curious enough to check, and with 2KB of SRAM that thing doesn’t have anywhere near enough memory to process a 320x200 RGB image, much less 1080p or 4K.

    Further, you definitely don’t want to send 2 images per second down to a server in uncompressed format (even 1080p RGB with an encoding that loses a bit of color fidelity so as to use just two bytes per pixel adds up to 4MB uncompressed per image), so it’s either using something with hardware compression or it’s spending processing cycles on that.
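
    (For reference, the arithmetic behind that figure: 1920 × 1080 pixels × 2 bytes per pixel = 4,147,200 bytes, roughly 4MB per frame, so 2 uncompressed frames per second would mean pushing roughly 8MB/s upstream.)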

    My expectation is that it’s not the snapshotting itself that would eat CPU cycles, it’s the compression.

    That said, I think you make a good point, just with the wrong example. I would’ve gone with: a thing capable of handling video decoding at 50 fps - i.e. one frame per 20ms - (even if it’s actually using hardware video decoding) can probably handle compressing and sending over the network two frames per second, though performance might suffer if they’re using a chip without hardware compression support and a complex compression method like JPEG rather than something simpler like LZW or similar.


  • Liberals are just pro-Oligarchy: they think Money should be above the one power which is led by elected leaders, the State. That’s against Democracy just like the Fascists are, only with a different and more subtle mechanism determining those whose power stands above the power of the vote.

    They’re just a different kind of Far-Right from the Fascists, which is why it is so easy for them to support Zionists - who are ethno-Fascists, the same sub-type of Fascism as the Nazis - even while the latter commit a Genocide.

    People with even the slightest shred of Egalitarian values would never support those committing ethnic cleansing.


  • It seems to me they’re a country built on 19th century white colonialist values (Jewish white colonialism is no better than the once much more common Christian kind) and which has never evolved from those values but rather kept going until reaching the natural conclusion: Genocide.

    (It’s not by chance that Israelis keep claiming that they have “Western Values” - it’s really just a politically correct way of saying “white values”)

    Israel is similar to South-Africa, except that they were never forced to stop and just kept doubling down on the racism and violent oppression of the ethnicity they victimize.

    I blame mainly the US and Germany for the continued support of Israel’s white colonialism and its natural outcome of Genocide.