TL;DR: This version improves speed (processor-specific optimizations), security (Windows security features only available in 64-bit), and stability (they don't say why; it could be related to Windows itself).
crash rates for the renderer process (i.e. web content process) are almost half that of 32-bit Chrome
I wonder how many of those crashes are due to running out of memory, because it otherwise seems quite odd that the 32-bit version would "crash more", unless they (accidentally?) fixed some bugs in the porting process.
Realistically, this means a browser that can use more than ~3GB of RAM, and that has its advantages and disadvantages.
The real advantage is that the address space layout randomisation done for security has a way bigger pool of addresses to use or not use. Being able to use more RAM is then a bonus.
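If you want to see the randomisation yourself, here's a toy demo (my own sketch, nothing Chrome-specific): print a heap and a stack address, run it a few times on 32-bit and 64-bit builds with ASLR enabled, and compare how much the addresses jump around between runs.

    /* Toy ASLR demo: with randomisation on, both addresses change on
       every run, and a 64-bit process has far more address bits for
       the OS to randomise than a 32-bit one does. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        void *heap = malloc(1);
        int stack_probe;
        printf("heap:  %p\n", heap);
        printf("stack: %p\n", (void *)&stack_probe);
        free(heap);
        return 0;
    }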
Is there a potential initial performance impact for a system with lots of RAM and long-lived processes? I imagine for general desktop use it's negligible.
I can't imagine why; it's not as if RAM has to move some read head around. Maybe in the future it could mean large-RAM devices can't power down disused chips to save power, but I'm sure smart people will solve that kind of thing. I barely understand it myself.
I think you're misunderstanding the context. A percentage of users opt in to crash reporting, and from that we see all manner of crashes on user systems. The biggest causes in Chrome tend to be: third-party software (i.e. plugins and injected hooks), intentional termination of web content to prevent resource exhaustion (e.g. out-of-memory), and unstable hardware (e.g. bad memory).
Sorry, but the fact that this includes third-party software, intentional termination to prevent resource exhaustion, and unstable hardware was not evident from the sentence:
"crash rates for the renderer process (i.e. web content process) are almost half that of 32-bit Chrome"
That sort of implied (to me) that this was a measure of the instability of Chrome itself, not of its surroundings.
Users generally don't know or care whether the issue is a bug in Chrome or not. To them, the problem is just that Chrome is not working. So, we try to fix it where we can. We have code in Chrome to block third-party hook DLLs that we've identified as making the browser unstable. We have watchdog code to prevent excessive resource consumption and hangs caused by misbehaving web content or plugins. And while we can't fix bad hardware, we do have signals that attempt to narrow it down as a cause.
That's not to say that no crashes in Chrome are caused by our own bugs slipping through our various QA and testing processes. Those certainly exist, but in terms of crash rates they are dwarfed by the causes I listed previously.
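For the curious, the watchdog idea mentioned above is roughly this (my own minimal sketch, not Chrome's actual code): wait on the work with a deadline and treat blowing past it as a hang. Chrome does this at the granularity of whole sandboxed processes rather than threads.

    /* Minimal watchdog sketch (illustrative only). */
    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI worker(LPVOID arg) {
        (void)arg;
        Sleep(INFINITE);  /* stand-in for misbehaving web content */
        return 0;
    }

    int main(void) {
        HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        if (WaitForSingleObject(h, 5000) == WAIT_TIMEOUT) {  /* 5 s deadline */
            printf("worker hung; terminating\n");
            TerminateThread(h, 1);  /* crude; real code kills the whole process */
        }
        CloseHandle(h);
        return 0;
    }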
Well, it is trying to render the web, which is no small feat in itself given the rather creative ways in which people construct web pages. One of the more complex parts of a web crawler is the part that tries to extract the actual content of the page from the "html" :-) That some pages render at all is pretty amazing to me.
As you can see, Google has managed to cut the "crash rate" of KitKat to less than half of Gingerbread and iOS 7.1, even if the crash rate for those is pretty low, too.
Sorry, but 'x crashed y% of the time' is absolutely meaningless.
Crash rates for software are measured in frequency per total number of hours deployed. 0.9% to 2.5% of the time would translate into 'unusable'.
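To see why the denominator matters (illustrative numbers, not from the article): if 2.5% means "of page loads", a user doing 200 page loads a day hits 5 crashes a day, which really is unusable; if it means "of browsing sessions", someone with two sessions a day sees a crash every 20 days, which is merely annoying. The same percentage spans two orders of magnitude in user pain depending on what it's a percentage of.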
And the Android crash rates there are also unsupported.
Imagine if 2.5% of all bridges would collapse due to engineering errors and then they'd improve it by a factor of two and hail that as immense progress...
Bridges have a very different quality standard than web browsers do. Chrome doesn't kill you and everyone around you when it crashes.
A better analogy would be bridge designers coming up with a design that required half as much maintenance or half as much material cost to achieve the same factor of safety, or which doubled the expected lifetime of the bridge for the same lifetime cost. Those would be significant improvements in the design, even though eventually the bridge will still have to be replaced (either all at once or piecemeal over its life).
>Imagine if 2.5% of all bridges would collapse due to engineering errors and then they'd improve it by a factor of two and hail that as immense progress...
Only software is not like bridges, and crashes happen and do not bring the end of the world.
Imagine having a new-fangled machine called a telephone in 1930, where 1 in 10 (10%) calls were dropped mid-call. And then they managed to improve that to 2%.
That's not totally unlike how it was back then (heck, the analog telephone network was like that even in the eighties in some countries I know). And yet nobody thought of the phone network as "unusable" (compared to what? some non-existing non-crashing one?), and nobody blamed engineering in general.
Same for the early decades of the fax, same for the early dial-up internet, etc etc.
> Only software is not like bridges, and crashes happen and do not bring the end of the world.
There's a joke about woodpeckers and software engineering that's a long time favorite of mine. I think that the attention to quality of the product is still vastly behind what we expect as normal from other branches of industry.
If a CPU contains a bug, all the software guys scream blue murder: how was it possible that this several-billion-part, highly timing-sensitive design contained a bug that escaped detection during QA? And yet, as software guys, we routinely wave off any bugs as though bugs are simply a fact of life and you'd better get used to them (and to the subsequent crashes).
All I'm saying is that there is something wrong about that picture, not that I have the solution, merely that it feels as though we should do better and should strive to do better. Much better.
The phone is a good example, if only because it's one of the few areas where reliability is top of the requirements list rather than for instance execution speed. That's why it should be no surprise that Erlang has a telecommunications background. It even factors in broken hardware to some extent.
Not exactly 100% relevant or on topic but interesting reading:
>The phone is a good example, if only because it's one of the few areas where reliability is top of the requirements list rather than for instance execution speed. That's why it should be no surprise that Erlang has a telecommunications background. It even factors in broken hardware to some extent.
Yes, but my point was it took half a century or more for the phone network to have the reliability we enjoy today.
(Not to mention how the mobile phone network STILL sucks donkeys balls in large parts of the states, including in highly populated urban areas).
When half of North America went down due to a power failure, the only thing that still worked was the phone network, and that included mobile. I don't know your specific situation, but to me things like LTE and thousands of simultaneous users of RF-based infrastructure (think a stadium with a 50K crowd) are (even though I can picture a lot of what's happening behind the scenes) a testament to the effort telcos put into delivering the goods to their end users.
Even if it took half a century for the reliability to be 'right up there', what excuse do we have for software then? We're getting quite close to that half century.
>Even if it took half a century for the reliability to be 'right up there', what excuse do we have for software then? We're getting quite close to that half century.
For one, software is an ever-changing thing (new requirements, updates, changes to the OS and libraries etc). Something like a telephone network can basically be deployed and then just maintained.
Second, the complexity of our software stack in a modern OS is many times greater than the phone network's. And it all has to play together with any random program the user might want to install.
Third, what software can do now is amazingly different from (and more powerful than) what it did in 1950, e.g. real-time manipulation of multiple video/audio streams with filters, face recognition, and what have you (and that's just one app -- we're also running 20 others at the same time) -- compared to doing some simple business/math calculations.
Whereas the phone network still basically does the same thing: transfers data from one point to another and routes calls. It's a far narrower field.
> Imagine if 2.5% of all bridges would collapse due to engineering errors and then they'd improve it by a factor of two and hail that as immense progress...
As Justin was saying, most Chrome crashes are due to malware (and Firefox sees similar numbers I believe). Broadly speaking, bridges do not get malware.
That also makes me wonder where that statistic came from. I first thought they were measuring the number of crashes encountered on some set of test pages, but that doesn't make much sense: if those were bugs, wouldn't they be fixing them (and thus fixing the 32-bit version too)? If this is information from the user base, then there's definitely an inherent selection bias here: users running a 64-bit OS will likely have newer hardware than those on 32-bit (which includes a larger number of older machines with marginal hardware, e.g. CPUs overheating because the heatsinks are clogged with dust, half-failed capacitors, etc.), and so the lower crash rates may not come from 64-bit itself, but from more 32-bit users crashing Chrome due to marginal hardware.
Crashes due to hardware failure shouldn't even be among those counted. Those are impossible to protect against; every piece of software ever written for 'regular users' (so not avionics, nuclear plant control, and the like) was spec'd with 'functional hardware' as the #1 assumption.
That is not always the right assumption to make but for a browser I would say it is a reasonable one.
If you cannot measure a 'crash rate' you're doing something wrong. Because if nothing else, there is a low but present base rate of crashes caused by overclocked CPUs, bad memory, and flaky motherboards.
I worked on the software for a hardware system sold by my company. We were able to detect when our supplier changed memory vendors because of the increased number of 'impossible' crashes.
I think of Chrome as postmodern software; instead of trying to make it never crash, they handle crashes cleanly. Considering the incredible complexity of the code, this approach may reduce development cost significantly.
Nitpick correction for many of the comments here: if I'm not mistaken, Chrome can already use more than 3 GiB of RAM with the 32-bit binaries on Windows. Because of how its process sandboxing works, it is each helper process (i.e. tab) that is restricted to 3 GiB, plus the master process that manages (but does not render) them. That's why they've been able to get away with not making the jump to 64-bit all these years.
1. It's not always a single tab. The JS heap may need to be shared between multiple tabs due to objects shared via `window.opener` and other leaky cross-frame hacks.
2. The process will sooner run out of addresses in the 3 GB space than out of actual memory. After running for a while, a process may end up with objects allocated all over the address space, so that there isn't enough contiguous address space free for new allocations, even though there's still plenty of free memory in small chunks between live objects.
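Here's a contrived sketch of point 2 (compile it as a 32-bit binary to see the effect; the exact numbers depend on the allocator and OS):

    /* Illustrative only: claim as much of a 32-bit address space as
       possible in 1 MiB blocks, free every other one, then watch a
       single large request fail even though plenty is free in total. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK (1u << 20)   /* 1 MiB */
    #define COUNT 4096         /* up to 4 GiB worth of requests */

    int main(void) {
        static void *blocks[COUNT];
        size_t i, n = 0;
        while (n < COUNT && (blocks[n] = malloc(CHUNK)) != NULL)
            n++;                      /* grab address space until it runs out */
        for (i = 0; i < n; i += 2) {  /* punch 1 MiB holes everywhere */
            free(blocks[i]);
            blocks[i] = NULL;
        }
        if (malloc(64 * (size_t)CHUNK) == NULL)   /* needs 64 MiB contiguous */
            printf("64 MiB failed with roughly %zu MiB free\n", n / 2);
        else
            printf("the allocator found a contiguous run anyway\n");
        return 0;
    }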
This is huge for us. Desktop 3D applications were among the first to move to 64-bit. We have been suffering with 32-bit limitations in http://Clara.io, and while it works, getting access to more than 1 GB of RAM for JavaScript would be amazing.
What would be great is if Google supported other 64-bit browsers on Windows. Mozilla doesn't really officially support it, but they've provided 64-bit nightly builds for quite a while now. For example, the current 64-bit nightly can be found here: https://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/late...
Flash, Java, Silverlight, and so on all have 64-bit variants which work, but there's just no way to get the Google Talk and Hangouts plugin to work on those builds of Firefox.
Seeing the title gave me a bit of hope until I thought "No, of course, Google just bundles the plugin anyways... not like they'll publish a download link for a 64-bit build". I wouldn't even care if you had to click through several "Other system / Select custom download" links, if only it were available at all.
I feel like we have a chicken-egg problem. Firefox isn't offering 64-bit prominently because plugins have terrible support for it, and plugins don't support it because it doesn't exist yet... But, unlike Firefox, plugins can offer 64-bit compatible versions without angering a large number of users. Really, the ball should be in their court, Google included, to provide the options.
Actually, segmentation isn't available on 64-bit Windows kernels at all (not just in x64 mode), so NaCl already used a different type of sandbox model there, where the validator enforces read/write masking.
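For those unfamiliar with the masking trick, it looks roughly like this (simplified toy sketch with made-up names and constants, not NaCl's actual implementation): every address the untrusted code computes is forced in-bounds before use, so the validator only has to confirm the mask precedes each access instead of inserting a bounds check and branch.

    /* Toy software-fault-isolation sketch (illustrative only). */
    #include <stdint.h>
    #include <stdio.h>

    #define REGION_SIZE 4096u              /* toy region; NaCl's is far larger */
    #define REGION_MASK (REGION_SIZE - 1)  /* size must be a power of two */

    static uint8_t region[REGION_SIZE];

    static uint8_t sandboxed_load(uint64_t untrusted_addr) {
        return region[untrusted_addr & REGION_MASK];  /* always in-bounds */
    }

    int main(void) {
        region[0xFFFFDEADBEEFull & REGION_MASK] = 42;
        /* A wild guest address still lands inside the region. */
        printf("%d\n", sandboxed_load(0xFFFFDEADBEEFull));
        return 0;
    }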
I thought the main problem with porting software to 64-bit was the bad practice of developers assuming the sizes of types. Since this is Google and the project is relatively recent, I'm assuming that's not the reason.
Why couldn't you take the chromium source (and dependencies) and compile them for x64 months ago?
IIRC most issues revolve around plugins and performance (notably V8), possibly not directly because of type sizes but similar low level stuff that hampers JS-to-the-metal optimisations.
Linux, MacOS, and nearly all other Unix-y things use LP64 (int is 32-bit, long is 64-bit). Windows x86_64 uses LLP64 (int is 32-bit, long is 32-bit). This can make porting a pain.
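A tiny illustration of where that bites (hypothetical example; compilers warn on these casts, which is exactly the point): code that round-trips a pointer through `long` is lossless on LP64 and can silently truncate on LLP64.

    /* On 64-bit Linux/Mac (LP64) this round-trip preserves the pointer;
       on 64-bit Windows (LLP64) long is 32-bit, so the high bits can be
       lost. Portable code should use intptr_t from <stdint.h> instead. */
    #include <stdio.h>

    int main(void) {
        int x = 42;
        long stashed = (long)&x;   /* truncates on LLP64 -- compilers warn here */
        printf("sizeof(long)=%zu sizeof(void*)=%zu\n",
               sizeof(long), sizeof(void *));
        printf("pointer survived the round-trip: %s\n",
               (void *)stashed == (void *)&x ? "yes" : "no");
        return 0;
    }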
I'm guessing they finally did it now because ARMv8 hardware is coming, and they wanted to have Chrome for Android with 64-bit support for the improvement in performance and crash rate. So they decided they might as well do it for all operating systems now.
There were numerous factors, but that wasn't one of them. The biggest delays were things like the build toolchain and logistics. For example, Microsoft's toolchain couldn't even build a working, optimized, 64-bit Chrome binary until sometime last year.
Kind of ridiculous how late this comes. Also with Chrome not working well with Windows' HiDPI settings, I am not sure why folks on Windows would go with Chrome over Firefox frankly.
As a Linux enthusiast, I cringe every time I read "year of the Linux desktop". (Bear with me, this is on-topic)
Linux is a very usable OS today. Windows, comparatively, is a much less usable OS. USB drivers that need 3 minutes to set up every time I plug my mouse into a different port. More issues with sound than there are with PulseAudio. And, yes, this is a huge peeve of mine: only a tiny fraction of Windows software is ever ported to 64-bit. WTF?
Almost all Linux software is readily available on 64-bit, as 64-bit; this includes Chrome/Chromium from the very, very early days. So now a major web browser has been released as 64-bit on Windows. Congratulations. Maybe 2014 will be the year of the Windows desktop.
I wouldn't call it much less usable; it depends on what you're used to, I guess. Actively using both, each for its own task, I find them on par with each other. But I don't expect Windows to give me a super easy-to-set-up server with a mighty command line, and I don't expect Linux to give me a super smooth desktop UI that can be used as a digital audio/video workstation, for instance, because they both suck at those tasks equally; it's far from their target experience. And I also don't have the WTF feeling you have, as I never had the problems you talk about: three minutes? Really? More issues than PulseAudio? Almost sounds like you're trying to run Windows 7 on an old Pentium with crap drivers :P
I'm running Windows 7 on a current top of the line box.
Speaking of that, when I swapped out my old motherboard for a new one, Windows refused to boot, giving me a bluescreen every time. I googled around and found that this was, in fact, a very common issue and the only solution is a full reinstall. Woah.
More issues than PulseAudio, yes. In fact, I have zero issues with PulseAudio and plenty of issues with Windows. Not stuttering, but devices not being recognized, and a terribly confusing UI when it comes to, for example, switching sound playback devices (getting sound to work on my TV was an awful experience).
To change sound playback device, you simply right-click on the speaker in the system tray > Playback Devices > choose what you want and click on Set Default.
Sounds like most of your issues relate to just not knowing how to do certain things in Windows. That's basically the same reason people have issues with Linux.
It's fine to be comfortable in Linux, that's actually a great thing, but Windows is incredibly capable and usually easier to use for more advanced things. The basic UIs are similar enough to not be a big deal.
If you sysprep /generalize the machine before you swap the board, it'll work afterwards. It's like moaning that your brain transplant resulted in you not being able to move your arms or something similar.
The analogy is unsound; you can do this with any Linux/BSD/Solaris system just fine. It's Windows that requires a more complex procedure here.
So yes, it's the user's fault, but the point is that in this particular instance Linux is more user-friendly than Windows. The user needs more knowledge and expertise to do the same operation on a Windows machine than on a Linux machine.
If you take the disks out of a Dell and stuff them into an HP machine with the same chipset, Linux gives you no guarantee the Ethernet ports will be in the same order on startup.
It's just different and you see the differences as misfeatures.
When I complain about Windows, I'm expected to complain about things that are still present in the latest version of Windows (8.1 currently), not things that haven't been an issue since XP. In return, I expect you to do the same when complaining about Linux.
This is a solved problem, and systemd is coming to Debian as well.
I swapped my motherboard and CPU a month ago too (from an SB i3 on an H61 mobo to an Ivy Bridge Pentium on a B75 mobo) and everything worked fine on the same installation. I only had to install the chipset INF drivers, Ethernet, and USB3 drivers.
Most likely you didn't set up the SATA controller in the same mode (AHCI vs RAID vs legacy IDE).
I always thought that Windows gave me a super easy-to-set-up server too, precisely because I don't have to use the command line.
With Linux server setup, if I haven't done something in a while, I must go and hunt down information on what files I need to configure and what commands I need to run. Sometimes it takes hours of reading. With Windows, I can usually take a look at the standard management interface provided with the server software and just figure it out or be guided through it.
Your use of the fact that "almost all Linux software is readily available on 64-bit" (to imply that Windows developers are doing a bad job of porting software to 64-bit) is misleading: there is far, far more software for Windows than there is for Linux. Of course it hasn't all been ported to 64-bit. Windows developers also seem to generally respect the notion of backwards compatibility, so I am prepared to cut them some slack.
I think you're being a little bit unfair. (My personal preferences are not flowing into this assessment: I am a Linux enthusiast and in the past two years have only ever used Windows to play video games.)
This is probably because many major developers say they at best would gain ~1% more performance out of the change for a decently large investment of man hours. The opinion of those devs (whether you or I agree with them or not) is that time can be better spent making other improvements to their codebases.
I was using Windows from 1995-2005 and I still didn't know half the things it could do. If you're interested in getting this working (which I doubt, since you sound like you'd much rather whine about it), you can read how to do it here: https://wiki.archlinux.org/index.php/Laptop_Mode_Tools
Linux is completely unusable for me. For instance, I've never found a good replacement for Calendar on OS X. The nicest calendar program I used on Linux had no CalDAV support (WTF?) and required me to use increment/decrement buttons to input the start/end times of my events. On top of that, it lacked any organisational features such as colour coding or a 5/7 day week view. It felt like a toy.
On OS X, I spend a significant amount of time in the command line but the occasional times I need a GUI, Linux just isn't usable. Most graphical programs lack even the most basic functionality or consideration for user experience. Linux is great for headless environments but it's awful when I occasionally need a GUI.
> I've never found a good replacement for Calendar on OS X
It's funny because iCal is like the shittiest calendar out there. Use Outlook or even Google Calendar.
Let's say you want to do something simple like... find out if your team members are busy before sending a meeting invite. Oh, in Outlook I just click the Invite tab and find a time, but in iCal they've simplified it so all I have to do is email the entire team to ask if the time is acceptable and then hope nobody re-books during the time it takes to pick a time. "Insanely inefficient!"
Or let's say I want to manage multiple calendars, one with read, one with write access... wait, Mac users (and I'm one of them) don't do businessy things... hipsters don't have jobs!
Want a bonus? If your Mac crashes repeatedly due to kernel panic and you take it in to the "geniuses" at the istore... they tell you to uninstall all 3rd party software. Seriously that's their solution here. "Oh I see you're running 3rd party software, you need to uninstall this." I was at a loss for words. It wasn't, "Hey Office causes some issues..." it was, "You're running 3rd party software," said with a serious sneer. That guy needs to join the ranks of the unemployed. Mac used to be really great, now they just have good hardware.
But no, Outlook has hands down the best calendar tools out there. No question. Everything else is a poor copy.
> Let's say you want to do something simple like... find out if your team members are busy before sending a meeting invite.
That's a standard task for business calendaring, not personal calendaring. It also has little to do with the front-end software and much more to do with back-end support. Calendar.app supports Free/Busy and business scheduling if you're on a server that supports it, like Exchange.
> Or let's say I want to manage multiple calendars, one with read, one with write access...
You can use Calendar.app just fine?
You should at least have up-to-date information if you're going to shit on others' choices. It hasn't been "iCal" in 2-3 years, which is a good indicator that your information is very out of date.
> Seriously that's their solution here. "Oh I see you're running 3rd party software, you need to uninstall this."
Dell will tell you the same thing. OEMs support their initial configuration. If it KPs in that configuration, they will support it. Likewise, if your Dell bluescreens with the OEM image, they will support that too.
If it's caused by a 3rd party kernel extension in OS X or a bad driver in Windows, they won't support that. No OEM would.
I agree Outlook is the best for business use cases but for a college student, iCal is great. I have a colour for each subject, an "events" calendar (non-academic, recurring) and a "significant" calendar for important events such as exams. The setup works great for me but I can see why it wouldn't scale.
>That guy needs to join the ranks of the unemployed.
I've never dealt with the Apple "Geniuses", but I know a lot of people who have and didn't like it at all. To be honest, the only kernel panics I've had were caused by Little Snitch (now fixed). Since an update about a year ago, I've never had one. For me, OS X is extremely reliable.
Rather than have a different color code for each type of event, I have 3 different calendars in Google Calendar. One is my personal stuff. One is my ex-wife's work schedule, which is relevant because the third one is our daughter's schedule as she goes back and forth between us. I also have some externally managed calendars added, such as a hockey team's game schedule, etc. Each calendar displays as a different color in the UI. Google Calendar may very well be a great solution for you.
I was surprised when, after installing Windows 8 (64-bit Pro or Business or whatever it's called), I found Firefox and Chrome to both be 32-bit; I haven't checked Internet Explorer yet. I'm not sure whether that's even an issue or not. It bugged me anyway.
I've been running a 64-bit Linux OS for years now, and the most irritating thing about it was Flash plugin support. Some people deem an OS that can't play iPlayer or some such as unusable. Anyway, Flash support is reasonable for me under Chrome these days (didn't Adobe say they'd abandon 64-bit support on Linux?)
The missus has Windows 8 on her lappy, and both Chrome and Firefox frequently crap out, she has just gotten used to it.
>didn't Adobe say they'd abandon 64bit support on Linux?
No, Adobe abandoned NPAPI Flash, the one used by Firefox. Flash on Chrome uses PPAPI, which is still officially supported (my guess is that it's maintained/developed mostly by the Chrome guys, but it's just a guess).