Hacker News | Ezku's comments

An interesting piece featured in the article: “Concept and Location Neurons in the Human Brain Provide the ‘What’ and ‘Where’ in Memory Formation”, Nature Communications 2024 (https://doi.org/10.1038/s41467-024-52295-5)

This wasn’t in the article, but I feel it makes for good background reading: “Universal Principles Justify the Existence of Concept Cells”, Scientific Reports 2020 (https://doi.org/10.1038/s41598-020-64466-7)


This book is probably about a very different kind of “engineering” than what you had in mind, but it’s been highly influential to my thinking:

“The Social Construction of Reality: A Treatise in the Sociology of Knowledge.” Berger & Luckmann 1966.

Perhaps the core insight to me is that not only does every practice of engineering exist as embedded in the context of a socially constructed reality, but the practice of engineering itself also fundamentally involves the continual construction of such realities. In other words, for a software engineer to be able to do their job, they must among other things be a kind of applied social epistemologist.

I expect this framing doesn’t make much sense to many readers — I’m hoping the following articles might serve to illustrate:

“Programming as Theory Building.” Peter Naur, Microprocessing and Microprogramming 1985 (https://doi.org/10.1016/0165-6074(85)90032-8)

> … suggests that programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.

“Interpretation, Interaction and Reality Construction in Software Engineering: An Explanatory Model.” Kari Rönkkö, Information and Software Technology 2007 (https://doi.org/10.1016/j.infsof.2007.02.014)

> Floyd’s paper Outline of a Paradigm Change in Software Engineering requested that we move from a product oriented paradigm to a process oriented paradigm.

> Naur’s paper Programming as Theory Building made it painfully clear to us that exemplary resources in the form of material and available support are not enough when modifying others’ programs. In fact, if Floyd’s claims had been taken seriously by the software developers in Naur’s study, and if the same developers had access to an explanatory model … their difficulties could have been both anticipated and prevented.

> This article … explains from a natural language point of view, how interpretation takes place, and discusses the consequences of this in relation to interaction and reality construction in software engineering practice.


Research citation & abstract, for other PDF-challenged individuals:

Braddy, Jon. “Utilizing the Octothorpe (#).” Exchanges: The Interdisciplinary Research Journal, April 2022. https://doi.org/10.31273/eirj.v9i2.837.

”Felix Guattari’s ‘Schizoanalytic Cartographies’ acts as a methodological blueprint and can be used to explain a subject’s lack of expressivity when confronted by Foucauldian systems of discipline and punishment. Understanding mechanisms of regulation and control within a closed or open system is the purpose of cybernetics. This communication studies tradition emerged from the artificial intelligence work of Norbert Wiener’s data flows with the intentional purpose of steering people, thought, societies, and the cosmos towards—becoming. Using the analogy of the octothorpe (#) as a roadmap explaining the four cartographies (Flows, Phyla, Territories, Universes) as outlined by Guattari, this manuscript will analyze the film, ‘War Games,’ demonstrating schizoanalytic technique. Another layer of power over humanity is not a panacea, rather it ushers forth civilisation’s speedier, and predicted, demise.”

Here’s also a web version of the PDF, as processed by Readwise Reader: https://readwise.io/reader/shared/01hjkgjb3qb9408fk8pg8xap3x


Reads like another Sokal Hoax to me.


At least it has some movie recommendations.


Your point is a very important one. Do we check the interactions of a candidate drug molecule against every single molecular target in the body, and do we track what happens as a result of all those interactions? Typically a group of “usual suspects” on the researchers’ radar gets checked for, but by necessity a lot of ground goes uninvestigated.

  > Consider valproic acid: For many decades, its only use was in laboratories as a "metabolically inert" solvent for organic compounds
Speaking of “inert” solvents:

  > Dimethyl sulfoxide (DMSO) is the most common organic solvent used in biochemical and cellular assays during drug discovery programs.
  > Despite its wide use, the effect of DMSO on several enzyme classes, which are crucial targets of the new therapeutic agents, are still unexplored.
  > 1-4% (v/v) DMSO, the commonly used experimental concentrations, showed ∼37-80% inhibition of human acetylcholine-degrading enzyme, acetylcholinesterase (AChE)
(DMSO: A Mixed-Competitive Inhibitor of Human Acetylcholinesterase. ACS Chem Neurosci. 2017 https://pubmed.ncbi.nlm.nih.gov/29017007/)

Oops! How many investigations of specific drugs were in fact mostly showing what happens when you interfere with one of the most ubiquitous yet underappreciated signalling systems, the cholinergic system?

I’m hoping that widespread & systematic application of modern methods like in-silico molecular docking studies will lead to far fewer such oversights.


That's what controls are for: you're always comparing against the DMSO control. I also don't think anyone who's doing small-molecule screens thinks DMSO is inert.


Is it really considered acceptable not to link to the scientific research you are basing your news on?

Pubmed link (with abstract); full text is paywalled:

> A neuro-metabolic account of why daylong cognitive work alters the control of economic decisions. Curr Biol. 2022 https://pubmed.ncbi.nlm.nih.gov/35961314/


From the comments here, I was expecting to find myself hopelessly out of date and to end up with a migraine trying to parse through a mind-numbing list of changes. It turned out I was mistaken.

  > Me: oh, cool, they fixed so many tiny things I had bumped up against
  > Some others: oh no, why are things changing
I'm not getting it. Maybe I'm reading this wrong, but to me these seem pretty obvious small issues to smooth over.


The way JS/TS change _feels_ a lot more haphazard.

For example, why introduce a new method to support negative indexing? Supporting `array[-1]` instead of `array.at(-1)` would mean one less thing to remember.

Many of the changes make the language feel like a hodgepodge made from parts of other languages. This lack of cohesion is, IMO, what makes upgrading the language always feel like moved cheese.


Because your example is a breaking change, and breaking changes are hard to make in a runtime that needs to reasonably support two decades worth of web content.

For example, if you have a `binarySearch` function that returns -1 if an element isn't found, a developer might write something like `const result = arr[index]; if (result !== undefined) { ... }`. Redefining `arr[-1]` would make this start returning the last element instead of `undefined` at that index.
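A minimal sketch of that failure mode, assuming a hypothetical `binarySearch` that signals a miss by returning -1:

    // Hypothetical helper: returns the index of needle, or -1 if it isn't found.
    function binarySearch(arr, needle) {
      let lo = 0, hi = arr.length - 1;
      while (lo <= hi) {
        const mid = (lo + hi) >> 1;
        if (arr[mid] === needle) return mid;
        if (arr[mid] < needle) lo = mid + 1;
        else hi = mid - 1;
      }
      return -1;
    }

    const arr = [1, 3, 5];
    const index = binarySearch(arr, 4); // -1, not found
    const result = arr[index];          // today: undefined, so the guard below holds
    if (result !== undefined) {
      // If arr[-1] were redefined to mean "last element", this branch would
      // suddenly run with result === 5, silently changing behaviour.
    }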


We already have things like "use strict" because of backwards compatibility. Following the same idea, we could have something like "use ES2023" or something along those lines. The issue with JavaScript is that browsers have in-flux implementations of new features (as their parent companies see fit for their own usage), and there's no cohesive point-in-time release process. I think "living" standards are part of the reason why the web stack is so jumbled.

But what do I care, whatever mess and complexity arises from these "good enough" implementations is left for the generation after us to deal with :)


In some sense, the messiness of the living standard is also what enables large-scale web archival.

A lot of old code in other languages may be hard or impossible to compile and run without significant work.


Exactly right. `arr[-1]` means `arr["-1"]` and already does something.
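For instance:

    const arr = ['a', 'b', 'c'];

    arr[-1] = 'x';            // stored under the string key "-1", not as an array index
    console.log(arr['-1']);   // 'x'  (the very same property)
    console.log(arr.length);  // 3    (the "negative index" never touches length)
    console.log(arr.at(-1));  // 'c'  (the new method, which doesn't collide with the above)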


It is also a breaking change to use new syntax and functions, since old browsers do not support new features. From this perspective, `arr[-1]` seems like a fair breaking change.


No, because changing browsers to interpret `arr[-1]` as `arr[arr.length - 1]` breaks existing sites that expect `arr[-1]` to be interpreted as `arr['-1']`: That is, the value stored on object `arr` at key name '-1'.

Changing browsers to interpret `arr.at(-1)` as `arr[arr.length - 1]` doesn't affect any old code using `arr[-1]`.

It's not about supporting old browsers. It's about supporting old code.


I think you're confusing your application with the language itself.

Adding new syntax and functions to the language is not a breaking change. Old code will continue to work.

If you start using these new features in your application, and it no longer works on old browsers, then sure that's a breaking change. But that's a choice for you to make. The language is still backwards compatible.


`indexes[haystack.indexOf(needle)] = true`

There's a valid example of code that would be broken (`indexOf` returns `-1` for "not found"). Is it a good way of solving whatever the author was trying to do? Probably not, especially now that sets exist. Is it code you might conceivably find on hundreds of sites across the past decades of the world wide web? You bet.

Yes, we could introduce another "use strict". But we only just got rid of the one via ESM (which enforces strict mode). That was a one-off hacky solution to a hard problem coming off the end of a failed major version release of the language (look up ECMAScript 4 if you get a chance). We don't want to see a repeat of that.


Why is there still no simple way of handling changes like this?

Surely there should be a simple way to have a header in each file declaring the language version, so that the file would be interpreted as that version?


All of this was hashed out during the "Harmony"[1] days. Versioned-JS was of course one possible future. Maybe even still is. But the prevailing decision coming out around that time and leading to ES5 and ES2015: We'll add "use strict" as a single-point-in-time breaking opt-in upgrade to fix a lot of the common problems, but let's otherwise stick to "One JavaScript"[2].

You may find [2] and [3] especially enlightening for understanding this thinking, along with any other discussions from ES Discuss on the topic, if you feel like digging into the history.

[1] https://johnresig.com/blog/ecmascript-harmony/

[2] https://2ality.com/2014/12/one-javascript.html

[3] https://esdiscuss.org/topic/es6-doesn-t-need-opt-in


Well, "use stricter" and "use strictest" are still available...


!important


Maybe this is simple in implementation, but it's definitely not simple in developer experience.

You grab some code in one of your old projects for implementing a binary search. Can you copy-paste it into a new project that targets a newer language version?

The question isn't as simple as "does it have syntax errors", because we're talking about changing semantics here. Given a set of semantic changes and a piece of code, figuring out (either as a human or a computer) whether the observable characteristics of that code have changed is somewhere between vexing and impossible. It's entirely possible, for example, that your code encounters changed semantics, but not in a way that changes the actual behavior of the code.

In this world it just becomes very, very difficult to reason about extremely common operations; it'd be a constant source of frustration. There's a good reason you rarely see languages versioning their behavior in impactful ways.


> Why is there still no simple way of handling changes like this?

This is nothing JS specific. Breaking changes are breaking changes. If you can, don't introduce them.

> simple way to have a header in each file with the language version

One special aspect that differentiates JS from other languages:

It's both a language AND a universal runtime. A lot of JS that's executed is not JS that's written by humans but generated by a compiler/transpiler.

So adding a layer of header versioning is not a big win in terms of developer experience: it would anyway be the deployment toolchain that's responsible for dealing with such a versioning scheme, and it would ideally be invisible to the developer.


To be pedantic, adding an `at` function is a breaking change too.


Not really.

You can add a polyfill that checks whether `Array.prototype.at` exists, and if it doesn't, creates a function that does the same thing and adds it to `Array.prototype`, so that all `.at()` code works as expected.

Then once every environment you target supports `Array.prototype.at` by default, you can remove the polyfill to reduce the size of your code.
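A rough sketch of what such a polyfill might look like (simplified; real polyfills such as core-js handle more of the spec's edge cases):

    if (!Array.prototype.at) {
      Array.prototype.at = function (index) {
        // Coerce to an integer; NaN and undefined become 0.
        const n = Math.trunc(index) || 0;
        // Negative indices count back from the end of the array.
        const i = n < 0 ? this.length + n : n;
        return i < 0 || i >= this.length ? undefined : this[i];
      };
    }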


    if (array.at) {
      explode();
    }
Adding the `at` function would break the above code, which depends on `array.at` being undefined.


They test a lot of websites before introducing new methods. Something somewhere may break but it's very unlikely and this pragmatic approach allows progress.

This is also why the language got `Array.prototype.flat` instead of `flatten` (`flatten` was breaking an old version of a popular library called MooTools): https://developer.chrome.com/blog/smooshgate/


And extending javascript's built-in objects has been considered bad practice since at least 2007.

Before that point, browser environments were so different that you needed to write code per browser. Those theoretical concerns didn't really matter, since in practice you were essentially coding the same app in different scripting languages.


> extending javascript's built-in objects has been considered bad practice since at least 2007

Totally. It's just extending the prototype that causes the problem though, not subclassing (`class MyClass extends Array`). This causes a lot of confusion among new JS devs, so I underline it at every opportunity.
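To make the distinction concrete, a small sketch (the `flatten` body is just illustrative):

    // Risky: patching the shared prototype affects every array on the page and
    // can collide with future standard methods (the "SmooshGate" incident above).
    Array.prototype.flatten = function () {
      return [].concat(...this); // one-level flatten, purely for illustration
    };

    // Fine: subclassing only affects the instances you create yourself.
    class MyClass extends Array {
      last() {
        return this.at(-1);
      }
    }

    const xs = MyClass.from([1, 2, 3]);
    console.log(xs.last()); // 3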


When I look at clever TypeScript, I always think the code was written by either a Haskell refugee or a C++ template metaprogramming refugee, and this isn't good for the longevity of the language; see what happened to Scala's adoption because of it.


TypeScript will have no adoption problem. It is the de facto only compile-to-JS language supported on npm. It is also ergonomic for JS devs who want, or are forced (at work), to use typing, and it's an option in most scaffolding tools like Next.js or CRA. The fact that TS is actually very good just compounds this further!

It will live as long as JS is popular. The main threat is WebAssembly, which will make JS compatibility seem quaint.


Something should be pointed out to outsiders looking at TypeScript; otherwise they might get the wrong idea about the language.

In practice there seem to be two different sides to TypeScript code.

1. Regular projects: tend to have rather easy to understand/write/maintain code.

2. Dependency code: tends to have code-golfed metaprogramming that the IDE uses to auto-magically autocomplete and highlight issues in regular project code.
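A contrived sketch of the contrast (all of the names here are invented for the example):

    // 1. Typical project code: plain and explicit.
    interface User {
      id: string;
      name: string;
      age: number;
    }

    function greet(user: User): string {
      return `Hello, ${user.name}`;
    }

    // 2. Typical dependency code: mapped types and template literal types doing
    //    the metaprogramming that powers autocompletion in the project code above.
    type Getters<T> = {
      [K in keyof T as `get${Capitalize<string & K>}`]: () => T[K];
    };

    // Resolves to { getId: () => string; getName: () => string; getAge: () => number }
    type UserGetters = Getters<User>;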


While I'm a C++ refugee, most of the "clever" TS stuff I write is due to JS framework idioms creating duplicate work (redux-reducer typings...) or forcing `any` escape hatches to the degree that TS stops helping. That doesn't mean I'm "above" using `any` escape hatches where appropriate, though, or keeping things "dumb" (most of my TS code is fairly monomorphic).


After coding with it daily for the past year, I can think of very few times I've had to write "clever" TypeScript. And half of those are now solved with TypeScript's new `satisfies` operator :P

The only consistently annoying thing about it is how dumb it plays with JavaScript's built-in array methods, but it's still sufficiently smart 80% of the time (and for another 15%, a simple `as const` does the trick).
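For instance, a small sketch of both (the object is invented for the example):

    // `satisfies` (TS 4.9+) checks an expression against a type without
    // widening the inferred type of the value:
    const palette = {
      red: [255, 0, 0],
      green: '#00ff00',
    } satisfies Record<string, string | [number, number, number]>;

    palette.green.toUpperCase(); // OK: still known to be a string, not string | tuple

    // `as const` keeps literals narrow where inference alone would widen them:
    const sizes = ['small', 'medium', 'large'] as const;
    type Size = (typeof sizes)[number]; // 'small' | 'medium' | 'large'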


Yeah, this is why I wince every time TS gets another feature.

IMO it was entirely good enough for what it does quite a while ago. No need to add more—that's purely introducing risk (of the language's ecosystem getting worse, mainly) from my perspective.


On the contrary, many of the new TS features over the last year or more have been about making type inference smarter and allowing for less explicit typing.


Yep! Or making type checking safer (this list is enormous), or better aligning TS specifics with how JS is actually used (upcoming 5.0 getting import specifiers more or less right as a great example).


Developers love hating on JavaScript, on change, and on "complexification." This is all of the above.

The first comment I read after yours is literally "I hate how much programming languages change"


Agreed. My main complaint is that some of these changes don’t filter down fast enough for my liking, because I’ve bumped into the issues they fix often enough.

That, and for various reasons, it’s easy to use "import" everywhere in browser-side code, but painful to use "import" in Node. That’s a major selling point for ESBuild in my mind—I can avoid dealing with Node as much.


Yeah, the language changes seem pretty minor and incremental. I think the more interesting changes in the last few years have been in the engines. All major browsers now support ES modules and custom elements.


The big hurdle is a lot of new jargon that sounds more complicated than what it's doing.


Perhaps not many in this audience are dealing with having to mitigate the prospect of dementia. How about something closer to the horizon for almost all of us?

> Vitamin D deficiency can cause over‐activation of the pulmonary renin‐angiotensin system (RAS) leading to the respiratory syndrome. RAS is regulated in part at least by angiotensin‐converting enzyme 2 (ACE2), which also acts as a primary receptor for SARS‐CoV‐2 entry into the cells. Hence, vitamin D deficiency can exacerbate COVID‐19, via its effects on ACE2.

— Vitamin D and COVID-19: Role of ACE2, age, gender, and ethnicity. J Med Virol. 2021 https://pubmed.ncbi.nlm.nih.gov/33990955/

> Angiotensin-converting enzyme 2 (ACE2), a part of the renin-angiotensin system (RAS), serves as the major entry point into cells for SARS-CoV-2 which attaches to human ACE2, thereby reducing the expression of ACE2 and causing lung injury and pneumonia. Vitamin D, a fat-soluble-vitamin, is a negative endocrine RAS modulator and inhibits renin expression and generation… Therefore, targeting the unbalanced RAS and ACE2 down-regulation with vitamin D in SARS-CoV-2 infection is a potential therapeutic approach to combat COVID-19.

— A brief review of interplay between vitamin D and angiotensin-converting enzyme 2: Implications for a potential treatment for COVID-19. Rev Med Virol. 2020 https://pubmed.ncbi.nlm.nih.gov/32584474/

Here’s a caveat to remember when taking vitamin D:

> Majority of the adults are deficient in both vitamin D and magnesium but continue to go unrecognized … Mg is essential in the metabolism of vitamin D, and taking large doses of vitamin D can induce severe depletion of Mg. Adequate magnesium supplementation should be considered as an important aspect of vitamin D therapy.

— Magnesium Supplementation in Vitamin D Deficiency. Am J Ther. 2019 https://pubmed.ncbi.nlm.nih.gov/28471760/


Infrared is the missing part, especially for COVID, dementia, and autoimmune diseases: https://youtu.be/wadKIiGsDTw



FWIW, I think the article agrees.

> vitamin D effects were significantly greater in females versus males and in normal cognition versus mild cognitive impairment.

> vitamin D effects were significantly greater in apolipoprotein E ε4 non-carriers versus carriers.

Translation: if you’re already cognitively impaired, vitamin D can do little to fix that. The same goes if you’re genetically at high risk for Alzheimer’s (ApoE ε4), or generally at higher risk for neurovascular disease (male).

However, as I’m sure others in the comments will point out — supplementing vitamin D will at least ensure you’re not getting cognitive impairment due to vitamin D deficiency, which seems to absolutely be a thing.


Peripheral and central serotonergic systems are both relevant:

> Because 5-HT cannot cross the blood-brain barrier, the peripheral 5-HT system is functionally separate from the central 5-HT system. — Diabetes Metab J. 2016 https://pubmed.ncbi.nlm.nih.gov/27126880/


Well spotted!

> This is a modification of IBM's Plex font.

https://github.com/iaolo/iA-Fonts/tree/master/iA%20Writer%20...

