Age | Commit message | Author | |
---|---|---|---|
2023-09-03 | test: test comment data spans more thoroughly | Martin Fischer | |
2023-09-03 | fix: make doctype id spans encoding-independent | Martin Fischer | |
2023-09-03 | fix!: make set_self_closing encoding-independent | Martin Fischer | |
2023-09-03 | fix!: make attribute spans encoding-independent | Martin Fischer | |
2023-09-03 | fix!: make start/end tag name spans encoding-independent | Martin Fischer | |
2023-09-03 | fix: don't assume UTF-8 in machine/tokenizer | Martin Fischer | |
2023-09-03 | refactor: inline internal method only used once | Martin Fischer | |
2023-09-03 | test: verify that span logic incorrectly assumes UTF-8 | Martin Fischer | |
2023-09-03 | refactor: make span tests tokenizer-independent | Martin Fischer | |
2023-09-03 | refactor: let comment and doctype tests check multiple cases | Martin Fischer | |
2023-09-03 | fix!: make PosTrackingReader encoding-independent | Martin Fischer | |
While much of the span logic currently assumes UTF-8, we also want to support other character encodings, such as UTF-16, where characters can take up more or fewer bytes than in UTF-8. | |||
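To illustrate the problem these commits address (this sketch is not part of the crate), the same character occupies different numbers of bytes in different encodings, so byte-offset spans computed with UTF-8 assumptions are wrong for UTF-16 input:

```rust
fn main() {
    // U+20AC EURO SIGN: 3 bytes in UTF-8, but a single 16-bit code unit
    // (2 bytes) in UTF-16 — a byte-offset span valid for one encoding
    // points at the wrong place in the other.
    let euro = '€';
    println!("'€' UTF-8: {} bytes, UTF-16: {} bytes", euro.len_utf8(), euro.len_utf16() * 2);

    // Conversely, ASCII 'a' is 1 byte in UTF-8 but 2 bytes in UTF-16.
    let a = 'a';
    println!("'a' UTF-8: {} bytes, UTF-16: {} bytes", a.len_utf8(), a.len_utf16() * 2);
}
```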
2023-09-03 | refactor: also use some_offset for start/end tags | Martin Fischer | |
2023-09-03 | fix!: calculate tag offsets in Tokenizer instead of Emitter impl | Martin Fischer | |
2023-09-03 | fix: too small char ref error spans | Martin Fischer | |
2023-09-03 | chore: rename doctype_offset field to some_offset | Martin Fischer | |
We'll reuse the field for another offset in the next commit. | |||
2023-09-03 | refactor: proxy init_doctype through Tokenizer | Martin Fischer | |
2023-09-03 | test: verify too small char ref error spans | Martin Fischer | |
2023-09-03 | fix: off-by-one missing-semicolon-after-character-reference span | Martin Fischer | |
2023-09-03 | test: verify off-by-one missing-semicolon-after-character-reference span | Martin Fischer | |
2023-09-03 | chore: rename char ref test | Martin Fischer | |
The tests for character reference errors should be grouped together. So this commit puts "char_ref" first in the function name (since our error tests are ordered by function name). | |||
2023-09-03 | fix!: off-by-one end-tag-with-trailing-solidus span | Martin Fischer | |
2023-09-03 | fix: most error spans mistakenly being empty | Martin Fischer | |
With codespan_reporting an empty span shows up exactly like a one-byte span, which is why I didn't notice this mistake earlier. | |||
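As an illustration (not code from the crate): spans here are byte ranges, and an empty span is one whose start equals its end, which codespan_reporting renders just like a one-byte span:

```rust
fn main() {
    // An empty span covers zero bytes; a one-byte span covers one.
    // Both start at the same offset, which is why the rendered
    // diagnostics look identical and the mistake went unnoticed.
    let empty = 7..7;
    let one_byte = 7..8;
    println!("empty: {} byte(s), one-byte: {} byte(s)", empty.len(), one_byte.len());
}
```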
2023-09-03 | fix: off-by-one eof error spans | Martin Fischer | |
2023-09-03 | test: add span tests for eof errors | Martin Fischer | |
2023-09-03 | break!: make Emitter::emit_error take span | Martin Fischer | |
2023-09-03 | fix!: wrong attribute value spans for char refs | Martin Fischer | |
2023-09-03 | chore: move allow lint check attribute | Martin Fischer | |
2023-09-03 | //: fix outdated internal doc comment | Martin Fischer | |
2023-09-03 | test: verify wrong attribute value spans for char refs | Martin Fischer | |
2023-09-03 | docs: document character reference resolution | Martin Fischer | |
2023-09-03 | docs: document what has been ASCII-lowercased | Martin Fischer | |
2023-09-03 | docs: add example for NaiveParser's CDATA handling | Martin Fischer | |
2023-09-03 | feat: make DefaultEmitter public again | Martin Fischer | |
2023-09-03 | fix!: remove adjusted_current_node_present_and_not_in_html_namespace | Martin Fischer | |
Conceptually the tokenizer emits tokens, which are then handled in the tree construction stage (which this crate doesn't yet implement). While the tokenizer can operate almost entirely based on its state (which may be changed via Tokenizer::set_state) and its internal state, there is the exception of the 'Markup declaration open state'[1], the third condition of which depends on the "adjusted current node", which in turn depends on the "stack of open elements" known only to the tree constructor.

In 82898967320f90116bbc686ab7ffc2f61ff456c4 I tried to address this by adding the adjusted_current_node_present_and_not_in_html_namespace method to the Emitter trait. What I missed was that adding this method to the Emitter trait effectively crippled the composability of the API. You should be able to do the following:

    struct TreeConstructor<R, O> {
        tokenizer: Tokenizer<R, O, SomeEmitter<O>>,
        stack_of_open_elements: Vec<NodeId>,
        // ...
    }

However this doesn't work if the implementation of SomeEmitter depends on the stack_of_open_elements field.

This commit remedies this oversight by removing this method and instead making the Tokenizer yield values of a new Event enum:

    enum Event<T> { Token(T), CdataOpen }

Event::CdataOpen signals that the new Tokenizer::handle_cdata_open method has to be called, which accepts a CdataAction:

    enum CdataAction { Cdata, BogusComment }

the variants of which correspond exactly to the possible outcomes of the third condition of the 'Markup declaration open state'.

Removing this method also has the added benefit that the DefaultEmitter is now again spec-compliant, which lets us expose it again in the next commit in good conscience. (Previously it just hard-coded the method implementation to return false, which is why I had removed the DefaultEmitter from the public API in the last release.)

[1]: https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state
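The control flow this commit describes can be sketched as follows. The Event and CdataAction shapes are quoted from the commit message; the TreeConstructor fields, the element names, and the namespace check are made up for illustration and are not the crate's actual API:

```rust
// Shapes quoted from the commit message:
enum Event<T> {
    Token(T),
    CdataOpen,
}

enum CdataAction {
    Cdata,
    BogusComment,
}

// Hypothetical tree constructor owning the stack of open elements,
// which the tokenizer itself has no access to.
struct TreeConstructor {
    stack_of_open_elements: Vec<String>,
}

impl TreeConstructor {
    // Decides the third condition of the 'Markup declaration open state':
    // CDATA is only allowed when the adjusted current node is present and
    // not in the HTML namespace (toy check: a foreign element on top).
    fn cdata_action(&self) -> CdataAction {
        match self.stack_of_open_elements.last() {
            Some(name) if name == "svg" || name == "math" => CdataAction::Cdata,
            _ => CdataAction::BogusComment,
        }
    }
}

fn main() {
    let constructor = TreeConstructor {
        stack_of_open_elements: vec!["html".to_string(), "svg".to_string()],
    };

    // In the real crate the events would come from iterating the Tokenizer;
    // here we fake a small stream.
    let events = vec![Event::Token("some token"), Event::CdataOpen];

    for event in events {
        match event {
            Event::Token(token) => println!("token: {token}"),
            Event::CdataOpen => {
                // The real API would pass the action to
                // Tokenizer::handle_cdata_open(action).
                match constructor.cdata_action() {
                    CdataAction::Cdata => println!("tokenize as CDATA section"),
                    CdataAction::BogusComment => println!("tokenize as bogus comment"),
                }
            }
        }
    }
}
```

The point of the design is visible in the sketch: the decision that needs the stack of open elements lives in the caller's code, not behind an Emitter trait method, so the emitter no longer has to know about tree-construction state.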
2023-09-03 | //: elaborate on what a proper parser would do | Martin Fischer | |
2023-09-03 | refactor: simplify Iterator impl for Tokenizer | Martin Fischer | |
2023-09-03 | chore: use `return` instead of `break` | Martin Fischer | |
2023-09-03 | chore: move ControlToken enum definition to machine | Martin Fischer | |
2023-09-03 | fix: BufReadReader skips line on invalid UTF-8 | Martin Fischer | |
2023-09-03 | test: verify BufReadReader skips line on invalid UTF-8 | Martin Fischer | |
2023-09-03 | docs: document that BufReadReader reads UTF-8 | Martin Fischer | |
2023-09-03 | docs: fix typo | Martin Fischer | |
2023-09-03 | fix(docs): doctype name may be != "html" in HTML documents | Martin Fischer | |
2023-09-03 | fix!: add missing `R: Position<O>` bounds | Martin Fischer | |
It doesn't make sense that you're able to construct a Tokenizer/NaiveParser that you're unable to iterate over. | |||
2023-09-03 | docs: credit Markus in readme | Martin Fischer | |
2023-09-03 | docs: remove description of Emitter trait from readme | Martin Fischer | |
Implementing Emitter methods as no-ops works great with the NaiveParser but less so when you want spec-compliant HTML parsing since that requires tree construction and most Emitter methods to be implemented. Ideally we'll implement both tree construction and a new way of avoiding unnecessary allocations (without having to implement your own Emitter). | |||
2023-09-03 | docs: add 'Compliance & testing' section to readme | Martin Fischer | |
2023-09-03 | docs: add Limitations section to readme | Martin Fischer | |
2023-09-03 | docs: restore accidentally lost code block info string | Martin Fischer | |
I accidentally lost it in b125bec9914bd211d77719bd60bc5a23bd9db579. (I should have changed the info string to ```rust ignore.) | |||
2023-09-03 | docs: add changelog | Martin Fischer | |