Chrome unlikely to support XPath 3.1 (github.com/whatwg)
143 points by norswap on Nov 1, 2020 | hide | past | favorite | 110 comments




For those replying with "XML is bad and therefore Google is right", reading the whole thread presents a more complex picture. Quoting from various comments in that thread,

> this is a proposal for querying the HTML DOM with XPath, not XML.

> Per https://www.chromestatus.com/metrics/feature/popularity it does seem that about 1-2% of page views end up using XPath

One could argue whether a feature that has existed for over a decade and is only used by 1-2% of page views is worth improving, but that is not the argument that domenic is making here.

> Chrome is not interested in this. The XML parts of our pipeline are in maintenance mode...

> By "XML parts of our pipeline" I mean "everything implemented using libxml and libxslt".

His comment is that in the Chrome codebase, the XPath implementation is within some XML libraries, which are in maintenance mode. This may or may not be true for other browsers. It's interesting that these refer only to the implementation cost to Google, and does not make any reference to costs or benefits to other users of the web platform, which are discussed by other comments.

Overall, while there seem to be valid arguments both for and against the proposal, arguments for are being presented publicly in that thread while those against are being discussed elsewhere (perhaps at Google?), with only final, unchallengeable decisions being posted here. Domenic is fairly explicit in his refusal to engage in a discussion.

> As such, I won't be participating in this thread further. I think I've made our position clear.

Irrespective of whether you agree with the proposal itself, to an outsider like myself it looks like the process of discussion does not exist at WhatWG anymore, and that Google basically dictates terms.

Edit: moved a paragraph for clarity


> One could argue whether a feature that has existed for over a decade and is only used by 1-2% of page views is worth improving, but that is not the argument that domenic is making here.

> It's interesting that these refer only to the implementation cost to Google, and does not make any reference to costs or benefits to other users of the web platform, which are discussed by other comments.

A few months back Chrome shipped Constructible Stylesheets. The proposed specification was strictly opposed by other browser vendors, with Safari stating that they will not ever consider implementing them in the current state [1].

Chrome not only shipped it enabled by default, but refused to bring it back under a flag because it "is used on about ~.8% of page loads in Chrome now" [1] (safe to assume that all of them are Google users and users of Google-developed libraries).

It's the most recent example (also backed up by numbers provided by Google themselves). I won't even mention the plethora of standards that other browser vendors consider harmful, but are enabled by default in Google.

So yes, Google is only concerned by the cost to Chrome, never by the costs or benefits to other users of the web platform.

> it looks like the process of discussion does not exist at WhatWG anymore, and that Google basically dictates terms.

That is true. The standards process (both w3c/whatwg and tc39) is hijacked by the same 3-4 people from Google, and decisions are rammed through regardless of anyone's objections.

[1] https://github.com/WICG/construct-stylesheets/issues/45#issu...

[2] https://github.com/WICG/construct-stylesheets/issues/45#issu...


I am not sure we can fix the web until other browsers actively treat Chrome developers as a hostile entity. Firefox and Edge teams have both been repeatedly screwed by the assumption their counterparts at Google were acting in good faith.


This Twitter thread by Johnathan Nightingale talks about Google sabotaging Firefox: https://archive.is/tgIH9


They are what Joel Spolsky called "architecture astronauts".


1-2% of page views is pretty massive!

Consider adjusting this number to remove any Google sites that are viewed (to remove Google's own bias/quasi browser monopoly) - and it probably goes up quite a bit more.


The number of pages with Flash content was also massive for many years. There must be some discipline to remove things from the web platform or it will die.

XPath predates CSS selectors being exposed to JS, and it is absolutely clear that XPath had its best shot to move from niche to mainstream over a decade ago and it didn't happen.


Could part of the reason that XPath stayed a niche technology be that WHATWG and Google continuously chipped away at XML support and compatibility and shipped 20-year-old implementations?


> does not make any reference to costs or benefits to other users of the web platform

> arguments for are being presented publicly in that thread while those against are being discussed elsewhere (perhaps at Google?)

It's right there at the end of the highlighted comment:

"we would love to [...] replace them with something that generates less security bugs. Increasing the capabilities of XML in the browser runs counter to that goal."

So security bugs in XML seems to be the main issue?


I’m wondering which security risks they mean. I don’t see any security risk in XML itself, maybe it’s related to some XPath or XQuery functions?


I think it's more about the fact that libxml and libxslt are large pieces of code that have had CVEs raised against them (and fixed) in the past. They are still actively maintained.


I think the idea is the increased security risk of a potentially poorly maintained, large body of code with a wide attack surface.


> His comment is that in the Chrome codebase, the XPath implementation is within some XML libraries, which are in maintenance mode.

Chrome actually has two XPath engines. One is used for querying the HTML DOM with JavaScript; the other, in libxml2, is only used for XSLT transformations. (XSLT support is the main reason why Chrome still uses libxml2. If they weren't relying on libxslt, which in turn requires libxml2, they would probably switch to a better-suited XML parser like Expat or write their own.)


If only a tiny proportion of page views use XPath, it would seem it isn't performance critical.

If that's the case, rewrite the whole lot into JavaScript, run it in the browser sandbox, and rip out the libxml that has security concerns. Then invite pull requests to add new versions of whatwg things.

As soon as a browser API is written in JavaScript and only uses other public APIs, it becomes a near-zero maintenance burden.
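As a rough, hypothetical sketch (not Chrome's actual code, and the node shape is made up), a descendant query over a plain object tree shows how a query engine like this can live entirely in JavaScript on top of public data structures:

```javascript
// Hypothetical sketch: a tiny descendant-axis query over a plain
// object tree ({ tag, children }), illustrating that a selector
// engine needs nothing from native code.
function queryAll(node, tag, out = []) {
  if (node.tag === tag) out.push(node);
  for (const child of node.children || []) queryAll(child, tag, out);
  return out;
}

const tree = {
  tag: 'html',
  children: [
    { tag: 'body', children: [
      { tag: 'p', children: [] },
      { tag: 'div', children: [{ tag: 'p', children: [] }] },
    ] },
  ],
};

console.log(queryAll(tree, 'p').length); // 2: roughly what //p would match
```

A real sandboxed implementation would of course walk live DOM nodes and handle axes, predicates, and namespaces, but the maintenance-burden argument is about exactly this kind of plain-JS code.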


As of the last couple years, Chrome is the web. The writing has been on the wall since at least Widevine. The browser wars are over, and I don't see any scrappy startup crossing the moat of complexity GOOG has built around "modern" browsers. Especially when combined with their stranglehold of the standards committees. We need to find a new attack vector.


They’ll do it to themselves.

One thing I see regularly is Chrome’s web caching being so aggressive it is driving front-end developers to start using FF for dev.

We simply need to stand back and let time take its course.


Does Chrome no longer have the option to disable cache while dev tools is open?


It does have it. I personally often use Firefox for the css grid and flexbox tools but they're very similar in functionality.


A product owner I worked with around 2016 regularly had problems with this, eventually we figured out whenever this happened the only way to clear her cache was to open dev tools, long-press reload, and pick the 3rd option (it only showed up when dev tools were open).


It sure does, and it works; I'm not sure what use case the parent has problems with. Would be quite interesting to find out.


Not the parent, but I have run into these ones before:

Issue 645845 (Open, ~4 years old): DevTools: "disable cache" doesn't work for media resources: https://bugs.chromium.org/p/chromium/issues/detail?id=645845

Issue 470030 (WontFix): 'Disable cache' should prevent HSTS redirects: https://bugs.chromium.org/p/chromium/issues/detail?id=574345

Issue 775435 (Fixed after ~1 year): Disable cache does not disable cached redirects https://bugs.chromium.org/p/chromium/issues/detail?id=775435


Oh, it's fine keeping the development tools open for local dev, but then it seems like developers (usually more junior) will get spooked when they open the application in a new tab, loading from the test server, and their changes appear to not have been built out, or only half built out. It causes some confusion, and I often need to remind others that if they are using Chrome, to please clear their cache.

The other thing that I should have mentioned is with non-dev staff. When new builds and updates get pushed out, they often have problems because of Chrome's aggressive caching. This group isn't as privy to developer tools and caching, and it is highly frustrating and annoying for them. It seems like they will still use Chrome for casual browsing, but then start to use Firefox or Safari for testing builds.

I use Firefox 100% of the time and very very rarely have these kinds of issues.

All I can say is that something is seriously wrong with Chrome's caching policy, from a UX perspective. People do get frustrated in ways that they don't when using Safari or Firefox.


I mean, to some degree you need to take advantage of build options that put out resources under unique paths, which I'm fairly certain is standard guidance by now.


Yes, I should implement scorched-earth cache busting and force a complete reload of all bundles, because one craptastic browser relies on aggressive caching so that their complimentary ad services don't bog down website performance. That's exactly how the web should be.


... Yes? I mean, if you're not all-in on the JS ecosystem you can just use a cache-buster parameter on your JS requests in development, but I think the recommended approach is to use module splitting so that parts of the bundle that don't change remain the same.

Alternatively if you just hate caching you can set cache control headers on your js.

I think tying caching behavior to Google's web sites is silly, caching is a crucial part of performance for any heavy web application.


... No.

Every other browser seems to find a sane middle ground.

Anyway this was never about my own practices, this was about how Chrome has become annoying and how it will result in attrition. The faster this happens, the better.


Things looked similar amid the dominance of IE. I think it will probably happen eventually, though you're right that a start-up probably won't do it. The problem is that no one pays for web browsers. There are companies building alternative engines for commercial embedding (including one that's parallel and very fast), so it would probably involve one of those being open-sourced.


I mean some people working on Chrome tried to remove XSL support at some point:

https://bugs.chromium.org/p/chromium/issues/detail?id=514995

which was already nuts, so they are clearly not interested in anything related to XML. XML is a fantastic tool, it's just that a whole generation of developers refused to even touch it and are biased against it.

And since Chrome is now basically the web... wait til there is only Chrome and Safari left and see what happens to web "standards"...
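For reference, the XSL support in question is the browser's ability to apply a transform that an XML document references before rendering it. A minimal sketch (the `<items>` document shape here is hypothetical):

```xml
<!-- Minimal XSLT 1.0 sketch: renders a hypothetical <items> document
     as an HTML list when the browser applies the transform. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/items">
    <html><body>
      <ul>
        <xsl:for-each select="item">
          <li><xsl:value-of select="@name"/></li>
        </xsl:for-each>
      </ul>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```

The XML document opts in with a processing instruction such as `<?xml-stylesheet type="text/xsl" href="items.xsl"?>`; removing XSLT support breaks exactly this kind of client-side rendering.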


I don't know, I think I'm biased against it because I have touched it (even if XSLT is quite cool).


I changed my opinion of XML after I had to deal with yaml.

Whoever thought it was a good idea for complex configs (Kubernetes) was wrong.


> Whoever thought it is a good idea for complex configs (kubernetes) was wrong

Kinda agree, YAML is a mess wrt. many aspects.

> I changed my opinion of XML after I had to deal with yaml.

I still didn't change my opinion about XML; it is similarly a mess wrt. many aspects.

For both there is a sane subset which isn't a mess and which looks reasonably fine. Sadly this subset doesn't have clearly defined borders.

So I guess a subset of XML, i.e. XML reinvented in 2020, would probably be better than YAML. But then you could also do a YAML reinvented in 2020 ;=)


When we invented XML we did not have configuration files as a major use case - although the possibility of ubiquity had been hinted at in a speech by Michael Sperberg-McQueen earlier that year, comparing the potential of SGML to the infrastructure in tunnels beneath the city of Chicago.

There's an XPath from 2016 (3.1) but it didn't have any significant input from browser people - so the primary audience represented were people with large, complex, or multitudinous documents.

There's also microxml, although its traction is limited because of a (relatively minor) backward compatibility issue around whitespace in attribute values.

The use case of people editing small simple documents by hand, documents that did not contain mixed content (rich text if you like, with markup inside running text) wasn't major; in creating XML we did discuss having a syntax that distinguished elements that could contain text directly from those that could only contain other elements, but none of the suggestions were compatible with HTML, of course, or any other existing SGML vocabulary, and all of the proposals (including mine) were ugly and had flaws.

The primary advantages of using XML are

* you can use the XML stack - XPath, XSLT, XQuery, RelaxNG, XML Schema, XForms, EXI, etc etc - on your documents (including conf files)

* people who don't think of themselves as programmers can do sane powerful & useful things - the languages are declarative;

* XML documents can be in the problem domain - elements named after things they represent, not "div" and "span" for example

* you can write a custom grammar-based validation check that both gives some rudimentary QA and also helps to drive syntax-directed editors, hierarchical database schemas and so forth;

* syntax errors are fatal, and there's some redundancy in close tags, which help catch errors that often can't be caught easily by checking the data (e.g. for metadata)

For sure there are things we'd do differently with hindsight, but remember also that XML predates the success of JavaScript and CSS. There are things that would be different in JavaScript and CSS today, too, if done again.


> you can use the XML stack - XPath, XSLT, XQuery, RelaxNG, XML Schema, XForms, EXI, etc etc - on your documents (including conf files)

A super complex stack with questionable security properties and composes in a way that is the opposite of simple.

> people who don't think of themselves as programmers can do sane powerful & useful things - the languages are declarative;

Not sure how that distinguishes it from other markup languages. What are we comparing to that is imperative? TeX?

> XML documents can be in the problem domain - elements named after things they represent, not "div" and "span" for example

I mean, I guess XML works better for data serialization than a soup of HTML. That is not exactly a high bar.

> you can write a custom grammar-based validation check that both gives some rudimentary QA and also helps to drive syntax-directed editors, hierarchical database schemas and so forth;

You can do that with any format you can parse into an abstract syntax tree. I suppose XML is convenient compared to writing your own custom format, insofar as you already have an off-the-shelf parser, but we're comparing different serialization formats, not having no serialization format.

> syntax errors are fatal, and there's some redundancy in close tags, which help catch errors that often can't be caught easily by checking the data (e.g. for metadata)

That's pretty standard for all data serialization formats. If instead of data serialization you are instead talking about mark-up languages, that's almost always considered a bad thing.

> For sure there are things we'd do differently with hindsight, but remember also that XML predates the success of JavaScript...

I think that's pretty debatable when it comes to javascript. Regardless though, hindsight may be 20:20, but there is still room to look at fundamental design decisions. People may quibble about aspects of css and js, but the fundamental design choices turned out to be pretty solid to this day. XML on the other hand comes off as a weird compromise that tries to do too many things, doesn't do any of them all that well, resulting in an ecosystem of over the top complexity.


So... JSON?

Honestly if json had comments and allowed trailing commas it would be perfect.


It also would at least need a first class base64 type IMHO.

Oh and NaN,-Inf,+Inf.
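Those gaps are easy to demonstrate: standard JSON silently degrades non-finite numbers to null, and rejects trailing commas and comments outright.

```javascript
// Standard JSON has no representation for NaN or infinities;
// JSON.stringify degrades them to null:
console.log(JSON.stringify(NaN));       // -> "null"
console.log(JSON.stringify(Infinity));  // -> "null"

// ...and JSON.parse rejects trailing commas and comments:
let trailingCommaOk = true;
try { JSON.parse('[1, 2,]'); } catch (e) { trailingCommaOk = false; }

let commentOk = true;
try { JSON.parse('{"a": 1} // note'); } catch (e) { commentOk = false; }

console.log(trailingCommaOk, commentOk); // false false
```

Supersets like JSON5 exist precisely to fill these holes, but none of them is the interchange standard.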


XML isn't the most enjoyable thing to write (although we don't seem to have a problem when it comes to HTML and JSX). There's the whole debate around when to use attributes or children, and all that, which I can see would be a problem if the schema is poorly defined.

But then, it's not like YAML is any better with how confusing its whitespace rules can be, and all the short-hand ways of avoiding explicit syntax (see: do you indent arrays in a hash or not? What's up with having to double-indent in some cases but not others?). And given the length of many yaml-based configs, it becomes pretty much useless once you're a thousand lines in, which you'll easily approach with a CI config.

At least with XML, you could actually attach a stylesheet to it (even just CSS) and you'd automatically have a human readable version of the file, without having to bother with converting YAML or JSON or whatever. That'd simplify the generation of a lot of internal dashboards.
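That attachment is a single processing instruction at the top of the document. A hedged sketch (the element names and `report.css` are made up for illustration):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="report.css"?>
<report>
  <title>Build pipeline status</title>
  <job state="passing">frontend</job>
</report>
```

Opened directly in a browser, the document is rendered with the referenced stylesheet applied to the raw XML elements, with no conversion step in between.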


Kubernetes mostly uses JSON internally; YAML is only the "CLI" side of things. kubectl converts the YAML files to JSON. (https://kubernetes.io/docs/concepts/overview/working-with-ob...)


I think enterprise Java gave xml a bad name. I just hate having to go into a Wildfly config.


The European Securities and Markets Authority (ESMA) has just mandated that all listed companies in the EU publish their annual financial statements in XHTML instead of PDF or paper. Combined with XBRL taxonomies it's a very powerful structured format. Please don't forget that XML is widespread in business applications, and I haven't seen any similarly powerful replacement for XML validation and XPath. Please let me know if you know one.


There isn't an alternative, largely, I think, because XML people come at the problem from the idea that documents are primary, not just artifacts of programs.


Very good comment. I wholeheartedly agree.

Also, there is a whole world outside of software development: the world that uses this software. And that world produces documents that need to be managed. And those who manage them are users, not programmers.


Data-driven architecture is the future. Data has to be driving the software, not the other way around.


Safari tends to be slow to implement features, but they rarely outright reject implementing a feature.

Google Chrome hasn't implemented MathML yet, even though Firefox and Safari have: https://caniuse.com/mathml
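For context, MathML marks up formulas structurally rather than typographically. A small hand-written sample (not from the thread), rendering the fraction -b / 2a:

```xml
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mrow><mo>-</mo><mi>b</mi></mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</math>
```

In a supporting browser this displays as a typeset fraction; in Chrome it historically fell back to the raw element contents.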


Safari has been holding the web back for years on features that make the web competitive with Apple's app ecosystem. I don't believe the distinction of "rejecting" vs being slow has merit.

https://caniuse.com/push-api https://caniuse.com/mediasource


Safari vs. iOS Safari is a distinction that needs to be made and people need to keep in mind. Apple's a lot more likely to allow stuff in desktop that it won't even consider for iOS. I mean your own link for media source has it being fully supported since 2014 in desktop Safari.


Maybe I'm wrong, but I think mobile Safari is most Safari usage.


Technically true, but does it mean anything? Support on iOS Safari is the important thing.


Apple has refused to implement a large number of features, at least as many as Google. They're usually more quiet about it though unless it's part of marketing related to privacy. All the browsers take strong architecture and ecosystem positions.


Actually, they do refuse sometimes, not that I’m against it, but it’s inaccurate to say they don’t.

https://www.zdnet.com/article/apple-declined-to-implement-16...


The "due to privacy concerns" there could still be a real motivation.


The MathML support in Firefox and Safari is barely usable. Even if my target audience all browsed on Firefox and Safari, I'd avoid MathML.



I followed your first link on Safari. Some things look fine. Some things are ugly, which is too bad but tolerable. If it was just that, I might consider MathML. But some things are rendered in a way that changes their meaning, and that’s totally unacceptable.


MathML support in Blink is under active development.

https://bugs.chromium.org/p/chromium/issues/detail?id=6606


Don’t let perfect be the enemy of good. Use MathML, put up notices that your content only works in Safari and Firefox. These are all attacks against Google. FIGHT!


Have you misunderstood me? MathML does not work well even in Safari and Firefox, and that is why they are avoided.


>MathML does not work well even in Safari and Firefox

Parent understood that, that's why they said "Don’t let perfect be the enemy of good".

They mean "It might not work too well, but it works well enough to use".


I understood well what was said about Safari and Firefox.

I may have misunderstood how well it works/doesn’t in Chrome. For right or wrong I understood that MathML didn’t work at all in Chrome.

If there’s some support for a feature, even if it’s ropey, and if there’s no support for that feature in Chrome, then using that feature attacks Chrome. People will have to use other browsers in order to use that feature. They might then continue using that browser.


MathML is being added to WebKit; an implementation offered for Safari was rejected years ago.


I think that’s misleading since Safari’s development is not generally discussed outside the occasional WebKit blog post and the rare Safari developer on Twitter who may or may not rightfully ignore any questions/arguments.

A lot of features are implemented late and incorrectly. I think that the first Safari version that supported PWAs was so broken that many had to manually exclude the browser in their support detection code.


Search for rniwa on GitHub, he's one of the core developers, and is active in all discussions (including meeting notes that are often published in GitHub issues).


XML is undeniably a huge attack vector with a shrinking user base. I think that’s a very valid argument for not expanding its surface area.


Is it an attack vector? Can you elaborate further?


No. XML libraries increase the attack _surface_, but XML itself is passive. There was a famous "billion laughs" attack, but JavaScript was also vulnerable to this, as is any language that lets you concatenate strings. The usual fix is a max size on such strings. There are a couple of other potential vulnerabilities, such as the ability to access the content of any URL, including file:///etc/group, but again JavaScript has that too.
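The "billion laughs" payload mentioned above is just a handful of nested entity definitions that multiply on expansion (truncated here; a full payload nests around ten levels and expands to gigabytes):

```xml
<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
  <!-- ...each further level multiplies the expansion by ten... -->
]>
<lolz>&lol3;</lolz>
```

Modern parsers mitigate this with entity-expansion limits, which is the "max size" fix the parent describes.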


Exactly


This discussion is a decade late. The direction was set when HTML5 was chosen instead of XHTML 2.0 and I don't think any important group changed their mind since. Committee-made standards are impotent without popular implementations.


HTML5 does have a standard XML serialization, which subsumes the "XHTML 2.0" use cases (such as extensibility via other XML namespaces).


Sure and what can you use it for in the browser? SVG at most. XInclude? I don't think so. XHTML 2.0 was about much more than this.


I guess the non-inflammatory title would be "representative of Chrome project votes no on one of many working group proposals"?


They didn't just vote no. They basically said they're not implementing it even if everyone else votes yes.


Like my mother used to say: if all of your friends jumped off a bridge, would you jump too?


Like my mother used to say: "A standards working group decision should be binding to its members or there are no standards".


That's not how WHATWG is governed.


Too bad for the WHATWG.



Let’s deprecate old technologies that are currently bundled in web browsers instead of adding to them. Browsers need to go on a diet.


Browsers are quite fast nowadays.

They appear to be bloated mainly for two reasons:

1) companies pulling in all sorts of un-optimized content and making requests to a huge number of external services, effectively slowing down the render of pages

2) a lot of developers building things in HTML+JS+CSS that should really be native code, which doesn't always perform well and tends to slow down the whole browser. One badly-coded app (not using asynchronicity well and/or wasting memory) is enough to make the whole browser appear slow.


No, the browser itself is bloated.

For instance, the RPM for the Firefox I'm using right now is 100 megabytes, and that's an already compressed format. After installing, just one of the components (libxul.so, which has most of the executable code) is over 110 megabytes.

Each new feature will only increase this size even more.


Usually most of the size is various UI assets, not features.


Here at least, firefox depends on libxml2, which comes from a different package. But, bloated is in the eye of the beholder - there's always Dillo and lynx.


Which break on tons of sites because they simply can't render them and never will. That bloat isn't really bloat, because it is necessary to handle what modern users want. Browsers probably aren't ever going to "diet" again. They'll only remove insecure features.


I feel like XPath queries have been gaining more widespread usage over the last decade by developers at the css/javascript level, so perhaps it makes sense to improve the power of it.


There is no need to improve the power of XPath in browsers; they already have a scripting language, they don't need a second one.

What would be useful, however, is improving the precision. XPath 1.0 had a pretty restricted set of functions, which hampers filtering: you either need to bring too much back to JS, or the selector becomes overly complex with painful cross-language escaping, especially when dealing with HTML trees (token lists are really painful to deal with using just 1.0's function set).

Making EXSLT and applicable later functions available would be a huge boon without needing to change the language itself.
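The token-list pain is concrete: XPath 1.0 has no token-matching function, so selecting by class requires the classic whitespace-padding idiom. The JavaScript below mirrors that logic (the XPath expression is the standard idiom; the helper function is illustrative):

```javascript
// XPath 1.0 has no "contains token" function, so matching a class
// needs the classic padding trick:
//   //*[contains(concat(' ', normalize-space(@class), ' '), ' active ')]
// The same test, written out in plain JavaScript:
function hasClassToken(classAttr, token) {
  // normalize-space: trim and collapse internal whitespace runs
  const padded = ' ' + classAttr.trim().replace(/\s+/g, ' ') + ' ';
  // padding both sides ensures only whole tokens match
  return padded.includes(' ' + token + ' ');
}

console.log(hasClassToken('btn  active', 'active')); // true
console.log(hasClassToken('inactive', 'active'));    // false
```

Later XPath versions (and EXSLT) add functions like `tokenize()` that make this a one-liner, which is exactly the precision improvement being asked for.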


I thought XPath was terrible beyond 1.0?

E.g. https://tomforb.es/xcat-1.0-released-or-xpath-injection-issu...

But it does not seem to be mentioned as an argument.


XPath beyond 1.0 can do the same things that XPath 1.0 could do. You still can do your nifty one-liners. But it can do so much more now! Especially since it is a subset of XQuery now, and that's where things become very interesting.


I feel like a real dummy right now... Aren't modern web pages xhtml??? Isn't xhtml xml???

Removing support for XPath kills MANY tests. I feel like a crazy person. Can someone help with context?


Modern web pages are HTML. You can write HTML in a way that it is compliant with the XHTML standard and therefore also valid XML, but it will still be rendered by the HTML engine in the browser, not the XML engine.


No, XHTML died.

Pages are HTML and not necessarily XML compatible.

XPath is used for HTML independently of XML (the structures are similar enough that querying HTML with XML-based query systems can work).

But for Google it's part of their XML code base that they don't want to touch.
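The incompatibility is easy to demonstrate. Ordinary HTML like the following parses fine in an HTML parser but is not well-formed XML (unquoted attribute, bare ampersand, unclosed void element):

```html
<p class=note>Ben & Jerry<br>second line</p>
```

The XHTML-conformant equivalent would need `class="note"`, `&amp;`, and `<br/>`, which is exactly the strictness most real-world pages never adopted.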


Would have been less controversial if they had just renamed it to hpath.


The Google team is right. It had a great run, but it's time to let XML die.


XML is far from dead; ask most enterprise companies. XPath in particular can be very useful for finding nodes in an HTML tree.


Have they proposed an alternative?


YAML of course. /s


HTML and JavaScript. Works for PDF with PDF.js

XML is no longer relevant to the modern web


PDFs don't use HTML.

Standard PDFs are also not XML-based.

PDF.js (ab)uses HTML to somehow render PDF.

The HTML used by PDF.js is in no way a sane "human readable structured data" format, but a "browser compatible" ad-hoc representation of the underlying PDF model.

Lastly, PDFs are not formats for structured data, which is what XML is all about in the end.

So I have no idea why you think HTML+JS is a sane replacement for XML use cases; it isn't. It isn't a replacement to the degree that IMHO it doesn't even really make sense (the JS part).


> PDF.js ab-uses HTML to somehow render PDF.

Nah, it mostly uses canvas and SVG.


I can’t tell if this was a serious response, I am not trying to pull your leg.


It’s a very serious response.

XML is a defective technology that has no place in the modern web. Perhaps the only successful bit is SVG, for lack of a better alternative. MathML is a failure — KaTeX and MathJax do a fundamentally better job of rendering mathematics on the web — and are based on what people who write a lot of math actually use: TeX and friends.

If you need to interpret XML documents as HTML, use some javascript. The attack surface reduction of eliminating XPath, XHTML, XSLT and other mistakes like microformats is worth it alone.

I stand by this, having implemented enough of XPath, XML 1.1 and XSLT to implement WS-Security from scratch (have fun with c14n!).

The sooner we move on from the failed experiment of XHTML, the better. The idea that the browser is the means of extending the core document model is gone; most if not all power resides in the JS engine. If it makes sense to stick in the core browser engine, then it will be obvious when that is so via usage statistics.

You can compile libxml2 to wasm if you must (i’ve done this when I needed a more complete XPath implementation)


XML is rather successful outside of the web, in spite of all the vitriol poured on it. And the modern web is under a specific and unique combination of pressures to serve as a reference for the rest of software development. If you're looking for something that deserves the name "defective", you don't need to go any further.

For example, the problem of math is not only to render it in the browser. At the very least we may want to render it on the server and to index it. And since math expressions are in a document, there's a general need to process them programmatically for a variety of purposes, some of which are not even clear at the moment. With the KaTeX or MathJax solution each such scenario would have to include KaTeX or MathJax or a custom parser for the underlying TeX-like language, the only upside of which is that it's somewhat well-known and more or less easy to write. With MathML these and other scenarios can be handled with the standard XML toolchain. (And this doesn't mean we need to exclude that neat TeX-like language if we need to input it: we only need to add a step that transforms it into MathML.) MathML is, of course, not simple, but it addresses both presentational and semantic sides of a formula, something that no other solution does. It's complex because the math is complex.


Dude, you're delusional. The web only succeeded because it was built on declarative tech, and JS is the opposite of that.

Don't mix the WS-* trash with XPath/XSLT -- still the only standard data transformation technology. Last I checked, the JSON folks were still trying to reinvent XML Schema? The JSON stack has nothing on XML in terms of maturity and features.


What format do you recommend the Digital Humanities folks should use?


This is an editorialized link name and the target comment isn't even obviously wrong or anything. Let the awfully outdated XML tooling die.

Context: This was named "Chrome holding the web back: an example", I expect moderation will change this at some point to something like "Chrome unlikely to support XPath 3.1" which is considerably less incendiary and something few people here care about.


More like: "Chrome unlikely to follow WHATWG XPath 2.0+ support spec if ever merged"

Or "Chrome devs being against XPath 2+ spec because of internal maintenance problems"


Please just let XML die. Just because it existed doesn't mean it has to exist forever. Will there be pain points in getting rid of it for good? Yeah. But we'll be better off in the long run.


I don’t agree. Never touch a running system! I think you have no idea in how many software implementations XML plays an important role. Just because it’s not hot and fancy it does not mean it’s bad.


I don't care about hot and fancy. I care that XML is a usability nightmare and makes my eyes bleed. It should not be used in anything new. Part of achieving that goal is to not continue to implement and support it.


Where is XML a usability nightmare?


No one addresses why the company that dominates web search should have an outsize influence on standards in an area where it has an obvious conflict of interest in adopting technologies that _may_ empower potential competitors, i.e. the rest of the world.



