> The worst possible scenario is that the data/power standard supported by the physical connection isn't optimal. But it will always work.
I don't know what "always work" means here, but I feel like I've had USB cables that transmit zero data because they're power-only, as well as ones that don't charge the device at all when the device expects more power than the cable can provide. The only thing I haven't seen is cables that transmit zero data on some devices but nonzero data on others.
I don't think those cables are in spec, and there are a lot of faulty devices and chargers that don't conform to the spec, creating these kinds of problems (e.g., the Nintendo Switch 1). This is especially a problem with USB-C.
You can maybe blame the USB consortium for creating a hard-to-implement spec, but usually it's just people saving $0.0001 on the BOM by omitting a resistor.
Hamilton thought it was superfluous. Federalist 74 says:
> “The President may require the opinion in writing of the principal officer in each of the executive departments upon any subject relating to the duties of their respective offices.” This I consider as a mere redundancy in the plan; as the right for which it provides would result of itself from the office.
Note that this provision must be redundant even without a unitary executive, because otherwise the implication is that the only thing the President can do with principal officers is ask them for an opinion.
Some modern scholars think the provision, though functionally redundant, is there to address a dispute that arose during the debates about executive councils: https://www.yalejreg.com/nc/reconciling-the-unitary-executiv... (“Unsurprisingly, the issue of an executive council arose at the Philadelphia Convention. Several proposals to create a council of state or a privy council were offered. Some of the proposed councils would have provided advice to the President but would not have required that he follow it, whereas others might have required that he secure the consent of the council. But each of the proposals was rejected. Instead, the Convention took language from part of one of the executive council proposals – ‘he may require the written opinions of any one or more members’ of the council – as a model for the Opinions Clause.”).
So the clause is there not to describe what the principal officers must do, but what the President need not do. The President may but does not need to consult his principal officers before taking action.
> otherwise, the implication is the only thing the President can do with principal officers is to ask them for an opinion.
That's not at all the implication... how do you even reach that conclusion? The obvious implication is that the president can only do what he is legally permitted, which means he could do whatever Congress provides for in law, in addition to what's in the constitution. Because, you know, his job is to execute the law. And Congress and the constitution are the ones establishing the legal framework for agencies.
> This I consider as a mere redundancy in the plan; as the right for which it provides would result of itself from the office.
It's nice that Hamilton thought that, but what did those who wrote it think? It seems safe to assume they wrote it for a reason, not as fluff. Which brings us to...
> Some modern scholars think the provision, though functionally redundant, is there to address a dispute that arose during the debates about executive councils [...] the clause is there not to describe what the principal officers must do, but what the President need not do. The President may but does not need to consult his principal officers before taking action.
It's great some modern scholars think this, but this also isn't compelling. If that's what they wanted... they could and should have just said that directly, not left it as a historical puzzle for people to speculate about.
> That's not at all the implication... how do you even reach that conclusion? The obvious implication is that the president can only do what he is legally permitted, which means he could do whatever Congress provides for in law, in addition to what's in the constitution
The constitution doesn’t list any other supervisory powers the president has over officers. So if the Opinions Clause isn’t redundant, Congress needs to spell out all other supervisory authority, down to something as little as asking for opinions.
That reading isn’t just inconsistent with the unitary executive view, it’s inconsistent with every other common view of how the executive works. It would not only mean that Congress can create independent agencies, but that all cabinet officers are independent by default. Nobody seriously thinks that’s true, but that’s the implication of the non-redundant reading of the Opinions Clause.
It’s true that the framers probably didn’t put a redundant power in there just for funsies. But it’s also true that drafters don’t hide elephants in mouseholes. Article II only mentions executive officers in passing. It would be very odd if the drafters meant to invest them with tremendous independent power only by implication.
> It would not only mean that Congress can create independent agencies, but that all cabinet officers are independent by default. Nobody seriously thinks that’s true, but that’s the implication of the non-redundant reading of the Opinions Clause.
No. "By default"? That's a really weird way to make this sound crazy when it simply isn't. Congress is the branch that creates the departments and creates the legal framework around them in the first place. The president faithfully executes the law. That's not unserious, that's literally the point of the whole system.
A realistic Congress is, generally, not going to pass an act establishing an entire department and somehow neglect to prescribe how the heads are appointed and removed. (!) If it did that for some reason, then yeah, the heads would "by default" be independent, until/unless Congress prescribed otherwise in the future. And... so what? Either that'd be deliberate -- in which case it's equivalent to explicitly prescribing their independence, so it makes no difference unless your real belief is that Congress can't even prescribe this explicitly -- or it would be the result of hundreds of people simultaneously goofing, in which case they can just... fix it by passing another act. Or they deliberately wanted to sow chaos or play games by leaving this unclear, in which case... what else do you expect. In that case it's up to the voters to vote them out, or for courts to rule something if this silly hypothetical ever happens.
All of which is to say: "by default" basically means nothing here. It feels like a pointless argument with an agenda. The idea that the "by default" scenario somehow means some clause was superfluous and deliberately added for funsies is the unserious take here.
This makes things seem more complicated than they already are, I feel?
There's nothing special about auto here. It deduces the same type as a template parameter, with the same collapsing rules.
decltype(auto) is a different beast and it's much more confusing. It means, more or less, "preserve the type of the expression, unless given a simple identifier, in which case use the type of the identifier."
Generally, you don't. I'm not sure why the parent suggested you should normally do this. However, there are occasional specific situations in which it's helpful, and that's when you use it.
1. Consistency across the board (with the places where it's required: metaprogramming, lambdas, etc.). And as a nicety, it keeps function/method names aligned instead of having variable character counts for the return type before the names. IMHO it makes skimming code easier.
2. It's required for certain metaprogramming situations and it makes other situations an order of magnitude nicer. Nowadays you can just say `auto foo()` but if you can constrain the type either in that trailing return or in a requires clause, it makes reading code a lot easier.
3. The big one for everyday users is that the trailing return type is looked up in an extended scope. For example, if the function is a member function/method, the class scope is automatically included, so you can just write `auto Foo::Bar() -> Baz {}` instead of `Foo::Baz Foo::Bar() {}`.
1. You're simply not going to achieve consistency across the board, because even if you dictate this by fiat, your dependencies won't be like this. The issue of the function name being hard to spot is easier to fix with tooling (just tell your editor to color them or make them bold or something). OTOH, it's not so nice to be unable to tell at a glance if the function return type is deduced or not, or what it even is in the first place.
2. It's incredibly rare for it to be required. It's not like 10% of the time, it's more like < 0.1% of the time. Just look at how many functions are in your code and how many of them actually can't be written without a trailing return type. You don't change habits to fit the tiny minority of your code.
3. This is probably the best reason to use it and the most subjective, but still not a particularly compelling argument for doing this everywhere, given how much it diverges from existing practice. And the downside is the scope also includes function parameters, which means people will refer to parameters in the return type much more than warranted, which is decidedly not always a good thing.
In what architecture are those types different? Is there a good reason for it there architecturally, or is it just a toolchain idiosyncrasy in terms of how it's exposed (like LP64 vs. LLP64 etc.)?
CHERI has a 64-bit object size but 128-bit pointers (because the pointer values also carry provenance metadata in addition to an address). I know some of the pointer types on GPUs (e.g., texture pointers) also have wildly different address and pointer sizes. Far pointers on segmented 16-bit x86 would have a 16-bit object and index size but a 32-bit address and pointer size.
There was one accelerator architecture we were working on that considered making the entire datapath 32-bit (taking less space), with a 32-bit index type and a 64-bit pointer size, but this was eventually rejected as too hard to get working.
I guess today, instead of 128-bit pointers, we have 64-bit pointers and secret provenance data inside the CPU, at least on the most recently shipped iPhones and Macs.
In the end, I'm not sure that's better. Maybe we should have extra-large pointers again (back when 32-bit pointers felt so large, we stuffed other stuff into the spare bits), like CHERI proposes (though I think it still has a secret sidecar of data about the pointers).
I would love to see Apple get closer to CHERI. They could make a big change since they're vertically integrated, though I think their Apple Silicon transition for the Mac would have been the moment.
> the impact on the final assessment could be as high as $80
That's the financial impact. Depending on what you're missing, the nonfinancial impact might be opening yourself up to perjury, because you're knowingly claiming a falsehood as a fact on a tax return (even if it's financially to the government's benefit)... never mind potentially screwing up future tax returns in the process.
I don't commonly do this and I don't know many people who do this frequently either. But it depends strongly on the code, the risks, the gains of doing so, the contributor, the project, the state of testing and how else an error would get caught (I guess this is another way of saying "it depends on the risks"), etc.
E.g. you can imagine that if I'm reviewing changes in authentication logic, I'm obviously going to put a lot more effort into validation than if I'm reviewing a container and wondering if it would be faster as a hashtable instead of a tree.
> because I want to make sure my suggestion is correct.
In this case I would just ask "have you already tried X?", which is much faster than pulling their code, implementing your suggestion, and waiting for a build and test run.
While I'm not sure the particular price point is the biggest problem here, the student license pricing doesn't seem that great either. The language is hard enough to learn, and most students won't have time to figure out whether they want to buy it within a 15-day trial. They'd probably need half a semester at the very least, unless it's a required part of the curriculum. In the rare case where a student is already familiar enough to know they want it, four years at $75/year is $300... at that point they may as well just pay $390 for a perpetual personal license, so they can at least keep opening their files in the future.
That said, the parent was talking about it being expensive for use in industry. Personal and student licenses aren't relevant there.