path: root/src
2023-09-28  break!: emit chars instead of strings  (Martin Fischer)
The HTML spec specifies that the tokenizer emits character tokens. That html5gum always emitted strings instead was probably just done to make token consumption more convenient. For tree construction, however, character tokens are actually more convenient than string tokens, since the spec defines that specific character tokens should be ignored in specific states (and character tokens let us avoid string manipulation for these conditions). This should also make the DefaultEmitter more performant in cases where you don't actually need the strings at all (or only a few of them), since it avoids string allocations, though I haven't benchmarked it.
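To illustrate the shape of the change (a rough sketch; the variant names and surrounding data are simplified, not the crate's exact definitions):

    // Hypothetical before/after of the token type.
    enum OldToken {
        String(String), // a pre-assembled run of character data
        // ... tag, comment and doctype variants
    }

    enum NewToken {
        Char(char), // a single character token, as in the HTML spec
        // ... tag, comment and doctype variants
    }

    // A consumer that only inspects characters no longer forces the
    // emitter to allocate and grow Strings:
    fn count_chars(tokens: impl Iterator<Item = NewToken>) -> usize {
        tokens.filter(|t| matches!(t, NewToken::Char(_))).count()
    }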
2023-09-28  refactor: proxy emit_string calls through utils  (Martin Fischer)
This is done separately so that the next commit has a cleaner diff.
2023-09-28  refactor: move machine impl details to machine module  (Martin Fischer)
This commit separates the public API (the "Tokenizer") from the internal implementation (the "Machine") to make the code more readable.
2023-09-28  refactor: move utils module under tokenizer::machine  (Martin Fischer)
2023-09-28  refactor: only use InternalState re-export for feature-gated internal API  (Martin Fischer)
2023-09-28  refactor: move machine module under tokenizer  (Martin Fischer)
2023-09-28  //: remove wrong comment  (Martin Fischer)
Methods defined in another module don't have access to private fields, so the function could very well have been implemented as a method.
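A small standalone illustration of the Rust rule referenced here (module, type and field names are made up):

    mod tokenizer {
        pub struct Tokenizer {
            state: u8, // private: only visible inside `tokenizer` and its children
        }
    }

    mod machine {
        // An inherent impl may live in a different module of the same crate,
        // but that does not grant access to private fields: visibility is
        // decided by where the field is declared, not by whether the code is
        // written as a method or as a free function.
        impl crate::tokenizer::Tokenizer {
            pub fn step(&mut self) {
                // self.state += 1; // error[E0616]: field `state` is private
            }
        }
    }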
2023-09-28  break!: remove Token::Error  (Martin Fischer)
An error isn't a token (in general, and also according to the spec). You shouldn't have to filter out errors when you're just interested in tokens, but most importantly, having errors in the Token enum is annoying when implementing tree construction (since the spec conditions exhaustively cover all Token variants except Token::Error).
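For instance, tree-construction dispatch can then match on tokens exhaustively, mirroring the spec's per-token rules (a minimal sketch with a simplified, hypothetical Token shape):

    enum Token {
        StartTag(String),
        EndTag(String),
        Char(char),
        Comment(String),
        Doctype(String),
    }

    fn dispatch(token: Token) {
        match token {
            Token::Doctype(_) => { /* ... */ }
            Token::Comment(_) => { /* ... */ }
            Token::Char(_) => { /* ... */ }
            Token::StartTag(_) => { /* ... */ }
            Token::EndTag(_) => { /* ... */ }
            // no Token::Error arm that the spec never mentions
        }
    }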
2023-09-28  refactor: remove DefaultEmitter::push_error helper fn  (Martin Fischer)
2023-09-28  break!: rename Emitter::emit_error to report_error  (Martin Fischer)
2023-09-28  chore: move emit_error method up  (Martin Fischer)
2023-09-28  feat: add blanket impl of Reader for boxed readers  (Martin Fischer)
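The pattern behind this is presumably a forwarding impl along these lines (the trait shown is a stand-in, not the crate's actual Reader definition):

    trait Reader {
        type Error;
        fn read_char(&mut self) -> Result<Option<char>, Self::Error>;
    }

    // Blanket impl: a boxed reader simply forwards to the reader it owns,
    // so APIs that are generic over `Reader` also accept boxed readers
    // (for example ones chosen at runtime) without extra glue.
    impl<R: Reader + ?Sized> Reader for Box<R> {
        type Error = R::Error;

        fn read_char(&mut self) -> Result<Option<char>, Self::Error> {
            (**self).read_char()
        }
    }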
2023-09-27  break!: remove Emitter::pop_token, use Iterator instead  (Martin Fischer)
2023-09-27  chore: move bounds to where clause  (Martin Fischer)
2023-09-12  docs: move warning from DefaultEmitter to Tokenizer  (Martin Fischer)
2023-09-11  chore: move DefaultEmitter to own module  (Martin Fischer)
2023-09-09  refactor: merge token types with attr to new token module  (Martin Fischer)
2023-09-09  chore: group public modules together  (Martin Fischer)
2023-09-09  docs: stop referencing Emitter from token types  (Martin Fischer)
2023-09-03  docs: add spans example  (Martin Fischer)
2023-09-03  feat: add Doctype::name_span  (Martin Fischer)
2023-09-03  break!: make Doctype name field optional  (Martin Fischer)
2023-09-03  fix!: make comment data spans encoding-independent  (Martin Fischer)
2023-09-03  fix: make doctype id spans encoding-independent  (Martin Fischer)
2023-09-03  fix!: make set_self_closing encoding-independent  (Martin Fischer)
2023-09-03  fix!: make attribute spans encoding-independent  (Martin Fischer)
2023-09-03  fix!: make start/end tag name spans encoding-independent  (Martin Fischer)
2023-09-03  fix: don't assume UTF-8 in machine/tokenizer  (Martin Fischer)
2023-09-03  refactor: inline internal method only used once  (Martin Fischer)
2023-09-03  fix!: make PosTrackingReader encoding-independent  (Martin Fischer)
While much of the span logic currently assumes UTF-8, we also want to support other character encodings, such as UTF-16, where characters can take up more or fewer bytes than in UTF-8.
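A standalone example (not crate code) of why byte offsets are not portable between encodings:

    fn main() {
        for c in ['a', 'ä', '€', '𝄞'] {
            println!(
                "{c}: {} byte(s) in UTF-8, {} byte(s) in UTF-16",
                c.len_utf8(),
                c.len_utf16() * 2
            );
        }
    }

    // a: 1 byte(s) in UTF-8, 2 byte(s) in UTF-16
    // ä: 2 byte(s) in UTF-8, 2 byte(s) in UTF-16
    // €: 3 byte(s) in UTF-8, 2 byte(s) in UTF-16
    // 𝄞: 4 byte(s) in UTF-8, 4 byte(s) in UTF-16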
2023-09-03  refactor: also use some_offset for start/end tags  (Martin Fischer)
2023-09-03  fix!: calculate tag offsets in Tokenizer instead of Emitter impl  (Martin Fischer)
2023-09-03  fix: too small char ref error spans  (Martin Fischer)
2023-09-03  chore: rename doctype_offset field to some_offset  (Martin Fischer)
We'll reuse the field for another offset in the next commit.
2023-09-03  refactor: proxy init_doctype through Tokenizer  (Martin Fischer)
2023-09-03  fix: off-by-one missing-semicolon-after-character-reference span  (Martin Fischer)
2023-09-03  fix!: off-by-one end-tag-with-trailing-solidus span  (Martin Fischer)
2023-09-03  fix: most error spans mistakenly being empty  (Martin Fischer)
With codespan_reporting an empty span shows up exactly like a one-byte span, which is why I didn't notice this mistake earlier.
2023-09-03  fix: off-by-one eof error spans  (Martin Fischer)
2023-09-03  break!: make Emitter::emit_error take span  (Martin Fischer)
2023-09-03  fix!: wrong attribute value spans for char refs  (Martin Fischer)
2023-09-03  chore: move allow lint check attribute  (Martin Fischer)
2023-09-03  //: fix outdated internal doc comment  (Martin Fischer)
2023-09-03  docs: document character reference resolution  (Martin Fischer)
2023-09-03  docs: document what has been ASCII-lowercased  (Martin Fischer)
2023-09-03  docs: add example for NaiveParser's CDATA handling  (Martin Fischer)
2023-09-03  feat: make DefaultEmitter public again  (Martin Fischer)
2023-09-03  fix!: remove adjusted_current_node_present_and_not_in_html_namespace  (Martin Fischer)
Conceptually the tokenizer emits tokens, which are then handled in the tree construction stage (which this crate doesn't yet implement). While the tokenizer can operate almost entirely based on its state (which may be changed via Tokenizer::set_state) and its internal state, there is the exception of the 'Markup declaration open state'[1], the third condition of which depends on the "adjusted current node", which in turn depends on the "stack of open elements" only known to the tree constructor.

In 82898967320f90116bbc686ab7ffc2f61ff456c4 I tried to address this by adding the adjusted_current_node_present_and_not_in_html_namespace method to the Emitter trait. What I missed was that adding this method to the Emitter trait effectively crippled the composability of the API. You should be able to do the following:

    struct TreeConstructor<R, O> {
        tokenizer: Tokenizer<R, O, SomeEmitter<O>>,
        stack_of_open_elements: Vec<NodeId>,
        // ...
    }

However this doesn't work if the implementation of SomeEmitter depends on the stack_of_open_elements field.

This commit remedies this oversight by removing this method and instead making the Tokenizer yield values of a new Event enum:

    enum Event<T> {
        Token(T),
        CdataOpen,
    }

Event::CdataOpen signals that the new Tokenizer::handle_cdata_open method has to be called, which accepts a CdataAction:

    enum CdataAction {
        Cdata,
        BogusComment,
    }

the variants of which correspond exactly to the possible outcomes of the third condition of the 'Markup declaration open state'.

Removing this method also has the added benefit that the DefaultEmitter is now again spec-compliant, which lets us expose it again in the next commit in good conscience (previously it just hard-coded the method implementation to return false, which is why I had removed the DefaultEmitter from the public API in the last release).

[1]: https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state
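A sketch of how a tree constructor might drive this API (it builds on the TreeConstructor, Event and CdataAction definitions quoted above and on the Tokenizer being iterated for events; process_token, adjusted_current_node_is_foreign and the exact handle_cdata_open signature are assumptions, and error handling is elided):

    impl<R, O> TreeConstructor<R, O> {
        fn run(&mut self) {
            while let Some(event) = self.tokenizer.next() {
                match event {
                    Event::Token(token) => self.process_token(token),
                    Event::CdataOpen => {
                        // The decision now lives in the tree constructor,
                        // which owns the stack of open elements; the Emitter
                        // no longer needs to know about it.
                        let action = if self.adjusted_current_node_is_foreign() {
                            CdataAction::Cdata
                        } else {
                            CdataAction::BogusComment
                        };
                        self.tokenizer.handle_cdata_open(action);
                    }
                }
            }
        }
    }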
2023-09-03  //: elaborate on what a proper parser would do  (Martin Fischer)
2023-09-03  refactor: simplify Iterator impl for Tokenizer  (Martin Fischer)