TL;DR: This version improves speed (related to processor optimizations), security (Windows security features only available in 64-bit) and stability (they don't say why; it could be related to Windows itself).
crash rates for the renderer process (i.e. web content process) are almost half that of 32-bit Chrome
I wonder how many of those crashes are due to running out of memory, because it otherwise seems quite odd that the 32-bit version would "crash more", unless they (accidentally?) fixed some bugs in the porting process.
Realistically, this means a browser that can use more than ~3GB of RAM, and that has its advantages and disadvantages.
The real advantage is that the address space randomisation done for security has a far larger pool of addresses to draw from. Being able to use more RAM is a bonus on top of that.
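To make that concrete, here's a toy sketch in Python (my own illustration, nothing Chrome-specific, assuming CPython with ctypes): run it a few times and the address changes between runs thanks to ASLR, and in a 64-bit process the range it can land in is enormously larger than the ~4 GB a 32-bit process has to work with.

    # Toy illustration, not Chrome code: print where a fresh allocation landed.
    # With ASLR enabled the address differs between runs; a 64-bit process has
    # a vastly larger range of candidate addresses than a 32-bit one.
    import ctypes
    import platform

    buf = ctypes.create_string_buffer(4096)   # heap-backed allocation
    addr = ctypes.addressof(buf)

    print(f"{platform.architecture()[0]} process, buffer at {hex(addr)}")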
Is there a potential initial performance impact for a system with lots of RAM and long-lived processes? I imagine for general desktop use it's negligible.
I can't imagine why; it's not like RAM has to move a read head around. Maybe in the future it could mean large-RAM devices can't power down disused chips to save power, but I'm sure smart people will solve that kind of thing. I barely understand it myself.
I think you're misunderstanding the context. A percentage of users opt in to crash reporting, and from that we see all manner of crashes on user systems. The biggest causes in Chrome tend to be: third-party software (i.e. plugins and injected hooks), intentional termination of web content to prevent resource exhaustion (e.g. out-of-memory), and unstable hardware (e.g. bad memory).
Sorry, but the fact that this included third-party software, intentional termination to prevent resource exhaustion, and unstable hardware was not evident in the sentence:
"crash rates for the the renderer process (i.e. web content process) are almost half that of 32-bit Chrome"
That sort of implied (to me) that this was a measure of instability of Chrome, not of the surroundings.
Users generally don't know or care if the issue is a bug in Chrome or not. To them the problem is just that Chrome is not working. So, we try to fix it where we can. We have code in Chrome to block third-party hook DLLs that we've identified as making the browser unstable. We have watchdog code to prevent excessive resource consumption and hangs caused by misbehaving web content or plugins. And while we can't fix bad hardware, we do have signals that attempt to narrow it down as a cause.
That's not to say that no crashes in Chrome are caused by our own bugs slipping through our various QA and testing processes. Certainly, those exist, but in terms of crash rates they are dwarfed by the causes I listed previously.
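For anyone curious what a "watchdog" boils down to in principle, here's a minimal sketch (my own toy Python example, not Chrome's actual code): run the risky work in a separate process and kill it if it blows past a time budget, so the parent survives.

    import subprocess

    TIMEOUT_SECONDS = 10  # hypothetical budget for a misbehaving "renderer"

    # Stand-in for a hung renderer: a child process that spins forever.
    proc = subprocess.Popen(["python3", "-c", "while True: pass"])
    try:
        proc.wait(timeout=TIMEOUT_SECONDS)
    except subprocess.TimeoutExpired:
        proc.kill()   # terminate the runaway child instead of hanging the parent
        proc.wait()
        print("renderer hung; killed it rather than taking the browser down")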
Well, it is trying to render the web, which is no small feat in itself given the rather creative ways in which people construct web pages. One of the more complex parts of a web crawler is the part that tries to extract the actual content of the page from the "html" :-) That some pages render at all is pretty amazing to me.
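To give a flavour of that "pull the content out of whatever HTML you got" problem, a toy sketch using Python's standard html.parser (obviously nothing like a production crawler, and the input is made up):

    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collect the text nodes and ignore the (possibly broken) markup."""
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            if data.strip():
                self.chunks.append(data.strip())

    extractor = TextExtractor()
    # Unclosed tags, missing </b> -- the parser has to cope anyway.
    extractor.feed("<div><p>Hello <b>world<p>and unclosed tags everywhere")
    print(" ".join(extractor.chunks))  # -> Hello world and unclosed tags everywhere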
As you can see, Google has managed to cut the "crash rate" of KitKat to less than half of Gingerbread and iOS 7.1, even if the crash rate for those is pretty low, too.
Sorry, but 'x crashed y% of the time' is absolutely meaningless.
Crash rates for software are measured as a frequency per total number of hours deployed. Crashing 0.9% to 2.5% of the time would translate into 'unusable'.
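Back-of-the-envelope, with made-up numbers, of why the denominator matters:

    # Illustrative numbers only, not measured data.
    crashes = 70
    total_hours_deployed = 1_000_000   # hypothetical fleet-wide usage hours

    rate_per_1000_hours = crashes / total_hours_deployed * 1000
    print(f"{rate_per_1000_hours:.2f} crashes per 1,000 hours of use")
    # 0.07 crashes per 1,000 hours tells you something; "crashed 0.9% of the
    # time" does not, because nobody knows what "the time" is (sessions?
    # page loads? hours?).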
And the Android crash rates there are also unsupported.
Imagine if 2.5% of all bridges would collapse due to engineering errors and then they'd improve it by a factor of two and hail that as immense progress...
Bridges have a very different quality standard than web browsers do. Chrome doesn't kill you and everyone around you when it crashes.
A better analogy would be bridge designers coming up with a design that required half as much maintenance or half as much material cost to achieve the same factor of safety, or which doubled the expected lifetime of the bridge for the same lifetime cost. Those would be significant improvements in the design, even though eventually the bridge will still have to be replaced (either all at once or piecemeal over its life).
>Imagine if 2.5% of all bridges would collapse due to engineering errors and then they'd improve it by a factor of two and hail that as immense progress...
Only software is not like bridges, and crashes happen and do not bring the end of the world.
Imagine having a newfangled machine, called a telephone, in 1930, where only 1 in 10 (10%) calls were dropped mid-call. And then they managed to improve it to 2%.
That's not totally unlike how it was back then (heck, the analog telephone network was like that even in the eighties in some countries I know). And yet nobody thought of the phone network as "unusable" (compared to what? some non-existing non-crashing one?), and nobody blamed engineering in general.
Same for the early decades of the fax, same for the early dial-up internet, etc etc.
> Only software is not like bridges, and crashes happen and do not bring the end of the world.
There's a joke about woodpeckers and software engineering that's a long time favorite of mine. I think that the attention to quality of the product is still vastly behind what we expect as normal from other branches of industry.
If a CPU contains a bug, all the software guys scream blue murder: how was it possible that this several-billion-part, highly timing-sensitive design contained a bug that escaped detection during QA? And yet, as software guys, we routinely wave off any bugs as though bugs are simply a fact of life and you'd better get used to them (and to the subsequent crashes).
All I'm saying is that there is something wrong about that picture, not that I have the solution, merely that it feels as though we should do better and should strive to do better. Much better.
The phone is a good example, if only because it's one of the few areas where reliability is top of the requirements list rather than for instance execution speed. That's why it should be no surprise that Erlang has a telecommunications background. It even factors in broken hardware to some extent.
Not exactly 100% relevant or on topic but interesting reading:
>The phone is a good example, if only because it's one of the few areas where reliability is top of the requirements list rather than for instance execution speed. That's why it should be no surprise that Erlang has a telecommunications background. It even factors in broken hardware to some extent.
Yes, but my point was it took half a century or more for the phone network to have the reliability we enjoy today.
(Not to mention how the mobile phone network STILL sucks donkeys balls in large parts of the states, including in highly populated urban areas).
When half of North America went down due to a power failure, the only thing that still worked was the phone network, and that included mobile. I don't know your specific situation, but to me things like LTE and thousands of simultaneous users of RF-based infrastructure (think of a stadium with a 50K crowd) are (even though I can picture a lot of what's happening behind the scenes) a testament to the effort telcos put into delivering the goods to their end users.
Even if it took half a century for the reliability to be 'right up there', what excuse do we have for software then? We're getting quite close to that half century.
>Even if it took half a century for the reliability to be 'right up there', what excuse do we have for software then? We're getting quite close to that half century.
For one, software is an ever-changing thing (new requirements, updates, changes to the OS and libraries etc). Something like a telephone network can basically be deployed and then just maintained.
Second, the complexity of our software stack in a modern OS is many times greater than the phone network's. And it all has to play together with any random program the user might want to install.
Third, what software can do now is amazingly different from (and more powerful than) what it did in 1950: e.g. real-time manipulation of multiple video/audio streams with filters, face recognition and what have you (and that's just one app -- we're also running 20 others at the same time) -- compared to doing some simple business/math calculations.
Whereas the phone network still basically does the same thing: it transfers data from one point to another and routes calls. It's a far narrower field.
> Imagine if 2.5% of all bridges would collapse due to engineering errors and then they'd improve it by a factor of two and hail that as immense progress...
As Justin was saying, most Chrome crashes are due to malware (and Firefox sees similar numbers I believe). Broadly speaking, bridges do not get malware.
That also makes me wonder where that statistic came from; I first thought they were measuring the number of crashes encountered on some set of test pages, but that doesn't make much sense, since if those were bugs, wouldn't they be fixing them (and thus the 32-bit version too)? If this is information from the userbase, then there's definitely an inherent selection bias here: users running a 64-bit OS will likely have newer hardware than those with 32-bit (which includes a larger share of older machines with marginal hardware, e.g. CPUs overheating because the heatsinks are clogged with dust, half-failed capacitors, etc.), so the lower crash rates may not be due to 64-bit itself, but to more people on 32-bit crashing Chrome because of marginal hardware.
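Here's that selection-bias concern with made-up numbers (pure illustration; none of these rates come from Google):

    # Crashes per 1,000 hours, assumed identical software reliability on both builds.
    good_hw_rate, bad_hw_rate = 0.5, 4.0

    # Assumed hardware mix of each install base.
    mix_64bit = {"good": 0.9, "bad": 0.1}
    mix_32bit = {"good": 0.6, "bad": 0.4}

    def blended(mix):
        return mix["good"] * good_hw_rate + mix["bad"] * bad_hw_rate

    print(f"64-bit observed: {blended(mix_64bit):.2f} per 1,000 hours")   # 0.85
    print(f"32-bit observed: {blended(mix_32bit):.2f} per 1,000 hours")   # 1.90
    # Same code quality on both builds, yet 32-bit looks more than twice as
    # crashy purely because of who runs it on what hardware.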
Crashes due to hardware failure shouldn't even be among those counted. Those are impossible to protect against; every piece of software ever written for 'regular users' (so not avionics, nuclear plant control or the like) was spec'd with 'functional hardware' as the #1 assumption.
That is not always the right assumption to make but for a browser I would say it is a reasonable one.
If you cannot measure a 'crash rate' you're doing something wrong. Because if nothing else, there is a low but present base rate of crashes caused by overclocked CPUs, bad memory, and flaky motherboards.
I worked on software for a hardware system sold by my company. We were able to detect when a supplier changed memory vendors because of the increased number of 'impossible' crashes.
I think of Chrome as postmodern software; instead of trying to make it not crash they handle crashes cleanly. Considering the incredible complexity of the code this approach may reduce development cost significantly.
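A minimal sketch of that "embrace the crash" approach (my own toy example, not Chrome's architecture): keep the work in a child process and restart it when it dies, instead of betting everything on it never dying.

    import subprocess
    import time

    # Stand-in for a renderer that eventually crashes.
    WORKER_CMD = ["python3", "-c", "import time; time.sleep(2); raise SystemExit(1)"]

    for attempt in range(3):   # bounded restarts, just for the demo
        proc = subprocess.Popen(WORKER_CMD)
        proc.wait()
        if proc.returncode == 0:
            break
        print(f"worker exited with {proc.returncode}; restarting (attempt {attempt + 1})")
        time.sleep(1)          # crude back-off before restarting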