• 0 Posts
  • 109 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • I was already a dev in a small IT consultancy by the end of the decade and, having ended up as “one of the guys you go to for web-based interfaces”, I did my bit pushing Linux as a solution, though I still had to use IIS on one or two projects (I even had to use Oracle Web Application Server once), mainly because clients trusted Microsoft (or basically any large software vendor, such as IBM or Oracle) but did not yet trust Linux.

    That’s why I noticed the difference that Red Hat, with their Enterprise version and support plans, made to the acceptability of Linux.



  • CRT monitors internally use an electron gun which just fires electrons at the phosphor screen (from the back, obviously, and the whole assembly is one big vacuum chamber with the phosphor screen at the front and the electron gun at the back), using magnets to steer the electron beam left/right and up/down.

    In practice the way it was used was to point the beam at the start of a line, where it would start sweeping towards the other side; after a few clock ticks the line data would start being sent, then, after as many clock ticks as there were points on the line, it would stop for a few ticks and swing back to the start of the next line (and there was a wait period for this too).

    Back in those days, when configuring X you actually configured all of this in a text file, at a low level (literally the clock frequency, total lines, total points per line, empty lines before sending data - the top of the screen - and after sending data, as well as OFF ticks from the start of a line before sending data and after sending data), for each resolution you wanted to have.

    All this let you define your own resolutions and even shift the whole image horizontally or vertically to your heart’s content (well, there were limitations on things like the minimum and maximum supported clock frequencies of the monitor and such). All that freedom also meant that you could exceed the capabilities of the monitor and even break it.
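
    For illustration, those per-resolution timing entries were the X “Modelines”. A typical one looked something like this (this is the standard VESA timing for 1024x768 at 60 Hz; the exact numbers for a given monitor came from its manual):

    ```
    #                   clock   hdisp hsyncstart hsyncend htotal   vdisp vsyncstart vsyncend vtotal
    Modeline "1024x768"  65.0    1024    1048      1184    1344     768     771       777     806   -hsync -vsync
    ```

    The 65.0 is the pixel clock in MHz; the first group of four numbers gives the visible points per line plus where the horizontal sync pulse starts and ends and the total ticks per line, and the second group is the same for lines per frame.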


  • In the early 90s all the “cool kids” (for a techie definition of “cool”, i.e. hackers) at my University (a Technical one in Portugal with all the best STEM degrees in the country) used Linux - it was actually a common thing for people to install it on the PCs of our shared computer room.

    Later in that decade it was already normal for it to be used in professional environments for anything serving web pages (static or dynamic) along with Apache: Windows + IIS already had a lower fraction of that market than Linux + Apache.

    If I remember it correctly, in the late 90s Red Hat started providing their Enterprise version with things like support contracts - so beloved by the corporates, who wanted guarantees that if their systems broke the supplier would fix them - which did a lot to boost Linux use on the backend in non-Tech but IT-heavy industries.

    I would say this was the start of the trend that would ultimately result in Linux dominating on the server-side.


  • If it’s part of the Requirements that the frontend should handle “No results found” differently from “Not authorized”, even if that’s just by showing an icon, then each list of stuff which might or might not be authorized should have a flag signalling that.

    (This is simply data analysis: if certain information is supposed to be shown to the user, it has to come from somewhere, hence the frontend must get it from somewhere. Frontend code trying to “deduce” it from other data it gets is generally prone to the kind of problem you just got, because unless explicitly agreed and documented, sooner or later some deduction done by one team is not going to match what the other team is doing. Generally it’s safer to just explicitly pass that info in a field meant for that purpose, to avoid frontend-backend integration issues.)

    Authorization logic is almost always a responsibility of the backend (for various reasons, including proper security practices), and for the frontend it’s generally irrelevant why something is authorized or not, unless you have to somehow display, per list, the reason for it being authorized or not, which would be a strange UI design IMHO - generally there’s just a flag in the main part of the UI and a separate page/screen with detailed authorization information, if the user really wants to dig down into the “why”, which would use a different API call just to fill in that page/screen.

    So if it is indeed required that the frontend knows whether an empty result is due to “Not authorized” rather than “No results found” (a not uncommon design, though generally it’s better UI practice to simply not even give the user access to listings they are not authorized to see, rather than letting the user choose them and then telling them they’re not authorized, as the latter design is more frustrating for users), that info should be an explicit entry in what comes from the backend, as sketched below.
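
    As a sketch of what I mean (the field names here are made up, not from your actual API), the two cases then become unambiguous:

    ```
    { "authorized": true,  "results": [] }    <- may see this list, there is just nothing in it
    { "authorized": false, "results": [] }    <- not authorized to see this list
    ```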

    The JSON you’re getting is indeed different in the two cases but, if handled correctly, that shouldn’t matter.

    That said, IMHO, if all 3 fields in your example should be present, the backend should be putting a list in all 3 fields, even if for some the list is empty, rather than a null in some. It doesn’t matter what the JSON is, since even at the Java backend level a List variable holding a “null” is not the same as a List variable holding a List of length 0 - null vs empty list is quite a common source of mistakes even within the code of just the one tier, and it’s worse if it ends up in API data.
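
    In code the difference looks like this (a minimal Java sketch of the point being made):

    ```java
    import java.util.Collections;
    import java.util.List;

    public class NullVsEmpty {
        public static void main(String[] args) {
            List<String> empty = Collections.emptyList(); // field present, no entries
            List<String> missing = null;                  // field absent or null in the JSON

            System.out.println(empty.isEmpty());  // true - a perfectly valid "no results"
            System.out.println(missing == null);  // true - and missing.isEmpty() would throw
                                                  // a NullPointerException instead
        }
    }
    ```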

    Who is wrong or right ultimately depends on the API design having marked those fields as mandatory or optional.


  • That sounds like an error in the specification of the client-server API, or an erroneous implementation on the server side for the last version: nothing should be signalled via presence or absence of fields when using JSON, exactly because, as I described in my last post, the convention with JSON is that stuff that is not present should be ignored (i.e. it has no meaning at all) for backwards compatibility, and that breaks if all of a sudden presence or absence are treated as having meaning.

    Frankly, the fact that there isn’t a specific field signalling authorized/not-authorized leads me to believe that whoever designed that API isn’t exactly experienced at that level of software design: authorization information should be explicit, not implicit, otherwise you end up with people checking for not-in-spec side effects, like you did, exactly for that reason (i.e. “is no data being returned because the user is not authorized, or because there was indeed no data to return?”). That is prone to break, since anything not properly part of the spec might be interpreted differently by any of the teams working on it and/or changed at any moment.



  • If I remember it correctly, per the JSON definition, a key that is present but not expected should be ignored.

    The reason for that is to maintain compatibility between versions: it should be possible to add more entries to the data and yet old versions of the software that consumes that data should still continue to operate if all the data they’re designed to handle is still there and still in the correct format.

    Sure, that’s not a problem in the blessed world of web-based frontends, where the user’s browser just pulls the client code from the server so frontend and backend are always in sync, but it is a problem for all other kinds of frontend out there where the life-cycle of the client application and that of the server are different - good luck getting all your users to update their mobile apps or whatever, whenever you want to add functionality (and hence data in client-server comms) to that system.

    (Comms API compatibility is actually one of the big problems in client-server systems development)

    So it sounds like an issue with the way your JavaScript library handles JSON, or with your own implementation not handling, per spec, the presence of data which you don’t use.
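
    (For what it’s worth, in Java this tolerant-reader behaviour is typically a single deserializer setting - here’s a minimal sketch using the Jackson library, with a made-up DTO:)

    ```java
    import com.fasterxml.jackson.databind.DeserializationFeature;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.List;

    public class TolerantReader {
        // Hypothetical DTO: only the fields this client actually uses.
        public static class SearchResponse {
            public List<String> results;
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper()
                // Ignore unknown keys instead of failing on them:
                .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

            // "addedInV2" is a field a newer server version started sending;
            // this older client keeps working because it simply ignores it.
            String json = "{\"results\":[\"a\",\"b\"],\"addedInV2\":42}";
            SearchResponse resp = mapper.readValue(json, SearchResponse.class);
            System.out.println(resp.results); // [a, b]
        }
    }
    ```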

    Granted, if the server-side dev only makes stuff for your frontend, then he or she need not be an asshole about it and can be more accommodating. If however that data also has to serve other clients, then I’m afraid you’re the one in the wrong, since you’re demanding that nobody else rely on the backwards-compatibility conventions of JSON itself - which, as I pointed out, is a massive problem when you can’t guarantee that all client apps get updated as soon as the server gets updated - because you couldn’t be arsed to do your implementation correctly.


  • Around here in Portugal, where every summer the temperature exceeds 40°C for at least some days in August, we have outside roll-up shades on every window, so one of the tricks is to keep the shades down and the windows closed during the hottest and sunniest parts of the day, at the very least the afternoon.

    Then at night you open the windows and let the cooler night air in (even better if you do it early morning, around sunrise, which is the coolest time of the day).

    Note that this doesn’t work well with curtains or internal shades, because with those the conversion of light into heat when the light hits the shades/curtains (as they’re not mirrors and don’t reflect all the light back) happens inside the house, and thus that heat gets trapped indoors.




  • Almost 30 years into my career as a software engineer, I’m now making a computer game that takes place in Space and where planets and comets follow Orbital Mechanics, so I’m using stuff I learned at Uni all those years ago in degree-level Physics, since I went to university to study Physics (though I later changed to an EE degree and ended up going to work as a software developer after graduating, because that’s what I really liked to do).

    I’ve also had the opportunity to use stuff I learned in the EE degree for software engineering, the most interesting of which was using my knowledge of microprocessor design during the time I was designing high performance distributed systems for Investment Banks.

    (I’ve also used that EE knowledge in making Embedded Systems - because I can do both the hardware and the software sides - though that was just for fun)

    Also, pretty much throughout my career, I would often end up using University-level Mathematics: for example, in banking it tended to be stuff like statistics, derivatives and integrals (including numerical approximation methods), whilst game-making is heavy on trigonometry, vectors and matrices.

    So even though I never formally learned Software Engineering at University, the stuff from the actual STEM degrees I attended (the one where I started - Physics - and the one I ended up graduating in - Electronics Engineering) was actually useful in it, sometimes in surprising ways.

    At the very least, just the Maths will be the difference between being pretty mediocre and actually knowing what you’re doing in the more advanced domains that are heavy users of Technology: I would’ve been pretty lost at making software systems for the business of Equity Derivatives Trading if I didn’t know statistics, derivatives, integrals and numerical approximation methods, and ditto when making GPU shaders for 3D games if I didn’t know trigonometry, vectors and matrices.

    And this is without going into just understanding stuff I hear about but am currently not using, such as the Neural Networks used in things like ChatGPT; also, Statistics is invaluable in punching through most of the “common sense” bullshit spouted by politicians and other people paid to deceive the general public.

    Absolutely, you can be a coder, even a good one, without degree-level education, but for the more advanced stuff you’ll need at least the degree-level Maths, even if a lot of the rest of your degree will likely be far less useful or outright useless.


  • It’s not about debugging tools.

    Different high-level software designs (i.e. architectural designs), which are normally imposed by the game engine, make the developers writing code on top of them more or less likely to produce bugs. That’s down to lots of factors, including things like how the engine itself approaches error validation and handling, and in which domains the engine leaves coders the most freedom and in which it leaves less - some things are pretty safe to leave in the hands of even bad developers, others are not.

    The example of multi-threading in Unity should’ve made this clear: put a game engine that doesn’t impose a single-thread pattern in front of somebody with little or no experience in multi-threaded programming and you will get a huge rate of bugs (mainly race conditions), and as it so happens most developers out there have little or no experience in multi-threaded programming. Yet multi-threading can yield far more performance on modern CPUs, since they’re all multi-core. For that specific game engine a software architectural choice was made to go with a structure that is not as performant but is significantly less likely to lead to a high bug rate when used by the average coder, probably because Unity targets less experienced coders.
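
    (For anyone who hasn’t been bitten by one, a minimal Java sketch of such a race condition - two threads incrementing a shared counter without synchronisation, with the final total almost never being the expected 2,000,000:)

    ```java
    public class RaceCondition {
        static int counter = 0; // shared, unsynchronised state

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1_000_000; i++) {
                    counter++; // read-modify-write is not atomic, so updates get lost
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(counter); // almost certainly less than 2000000
        }
    }
    ```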

    Good Senior Designers and Technical Architects don’t design the high level structure of the software for themselves as coders, they do it for the kind of coders that are likely to be coding for it.

    Of course the developers themselves also have different capabilities and hence different baseline rates of creating bugs, hence why I said “both”.


  • It’s both.

    The architectural decisions are at the engine level and that stuff has a massive influence on the likelihood of bugs in the code running in that engine.

    For example, traditional Unity (not ECS) runs all game code (i.e. the code provided by those coding the game) in a single thread, which avoids A TON of multi-threading bugs (as that’s one of the hardest parts of programming to master) but is very bad for performance on multi-core CPUs. Game programmers can fire up separate threads using the standard libraries of the programming language itself and manage them, but everything in the development framework that’s part of the engine pushes them towards that single-threaded model, so only advanced devs bother, and only for very specific things.

    Also, the choice of programming language forced by the engine itself has a huge impact on the likelihood of bugs, but since I don’t want to start a Holy War I’m not going to start pointing fingers at specific languages and criticizing them ;)


  • The EULA part is the fishy one, since EULAs are not valid in most of the World - sellers can’t, after the sale, just force a change to the implicit contract which is the sale itself (worse, refuse to provide access to the functionality of purchased software after the buyer has fulfilled their part of the contract), so EULAs legally mean nothing except (apparently) in a handful of US states.

    The only “licensing conditions” that legally apply here are the ones agreed between seller and buyer before the sale - as determined by payment having been given and accepted - not after the sale.

    (Online services get away with TOS changes because it’s an ongoing service rather than a product sale, so the rules are different.)





  • Don’t take this badly but it sounds like you’ve only seen a tiny slice of the software development done out there and had some really bad experiences with Agile in it.

    It’s perfectly understandable: there are probably more bad uses of Agile out there than good ones and certain areas of software development tend to be dominated by environments which are big bloody “amateur hour every hour of the day, every day of the year” messes, Agile or no Agile.

    That does not, however, mean that your experience stands for the entirety of what’s out there, trumping even the experience of other people who also work in QA in environments where Agile is used.


  • Agile was definitely taken in with the same irrationality as fashion at some point.

    It’s probably the best software development process philosophy for certain environments (for example: where there are fast-changing requirements and easy access to end users) whilst being pretty shit for others (good luck trying to fit it in at a process level when some of the software development is outsourced to independent teams, or using it for high-performance systems design), and it eventually came out of that fad period mostly being used more for the right things (even if, often, less than properly) and less for the wrong things.

    That said, the Agile-as-fad phase was over a decade ago.