Age | Commit message | Author |
|
The spec refers to them only as RCDATA, RAWTEXT and PLAINTEXT.
See https://rust-lang.github.io/api-guidelines/naming.html.
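For illustration, a state enum following that guideline (treating each of the spec's all-caps names as a single UpperCamelCase word) might look like the following; the enum and variant names here are assumptions, not the crate's actual API:

    // Hypothetical illustration of the naming guideline: acronym-like
    // names become single UpperCamelCase words.
    pub enum State {
        Data,
        Rcdata,    // not RcData or RCDATA
        Rawtext,   // not RawText
        Plaintext, // not PlainText
    }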
|
|
Which action the tokenizer takes depending on whether an adjusted
current node is present but not in the HTML namespace is an
implementation detail and shouldn't be exposed in the API.
|
|
|
|
|
|
|
|
|
|
The method never really made much sense since
you could just as well use NaiveParser::new.
|
|
This commit separates the public API (the "Tokenizer")
from the internal implementation (the "Machine")
to make the code more readable.
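A minimal sketch of what such a split can look like (the field and type
names below are assumptions, not the crate's actual internals): the
public Tokenizer is a thin wrapper that owns and drives an internal
state Machine.

    struct Machine<R, E> {
        reader: R,
        emitter: E,
        state: u8, // stand-in for the internal state enum
    }

    pub struct Tokenizer<R, E> {
        machine: Machine<R, E>, // private; not part of the public API
    }

    impl<R, E> Tokenizer<R, E> {
        pub fn new(reader: R, emitter: E) -> Self {
            Tokenizer {
                machine: Machine { reader, emitter, state: 0 },
            }
        }
    }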
|
|
An error isn't a token (in general and also according to the spec).
You shouldn't have to filter out errors when you're just interested
in tokens, but most importantly, having errors in the Token enum is
annoying when implementing tree construction (since the spec conditions
exhaustively cover all Token variants except Token::Error).
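A sketch of the tree-construction benefit (the Token variants below are
simplified stand-ins, not the crate's actual Token enum): without a
Token::Error variant, a match over tokens can mirror the spec's
conditions exhaustively, and errors arrive through a separate channel.

    enum Token {
        StartTag(String),
        EndTag(String),
        Char(char),
        Eof,
    }

    fn tree_construction_step(token: Token) {
        match token {
            Token::StartTag(_) => { /* insert an element */ }
            Token::EndTag(_) => { /* pop from the stack of open elements */ }
            Token::Char(_) => { /* insert a character */ }
            Token::Eof => { /* stop parsing */ }
            // no Token::Error arm needed
        }
    }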
|
|
|
|
|
|
|
|
|
|
Conceptually the tokenizer emits tokens, which are then handled in the
tree construction stage (which this crate doesn't yet implement).
While the tokenizer can operate almost entirely based on its state
(which may be changed via Tokenizer::set_state) and its other internal state,
there is the exception of the 'Markup declaration open state'[1], the third
condition of which depends on the "adjusted current node", which in turn
depends on the "stack of open elements" only known to the tree constructor.
In 82898967320f90116bbc686ab7ffc2f61ff456c4 I tried to address this
by adding the adjusted_current_node_present_and_not_in_html_namespace
method to the Emitter trait. What I missed was that adding this method
to the Emitter trait effectively crippled the composability of the API.
You should be able to do the following:
    struct TreeConstructor<R, O> {
        tokenizer: Tokenizer<R, O, SomeEmitter<O>>,
        stack_of_open_elements: Vec<NodeId>,
        // ...
    }
However, this doesn't work if the implementation of SomeEmitter
depends on the stack_of_open_elements field.
This commit remedies this oversight by removing this method and
instead making the Tokenizer yield values of a new Event enum:
    enum Event<T> { Token(T), CdataOpen }
Event::CdataOpen signals that the new Tokenizer::handle_cdata_open
method has to be called, which accepts a CdataAction:
    enum CdataAction { Cdata, BogusComment }
the variants of which correspond exactly to the possible outcomes
of the third condition of the 'Markup declaration open state'.
Removing this method also has the added benefit that the DefaultEmitter
is now again spec-compliant, which lets us expose it again in the next
commit in good conscience (previously it just hard-coded the method
implementation to return false, which is why I had removed the
DefaultEmitter from the public API in the last release).
[1]: https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state
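A self-contained sketch of the control flow this enables; Event,
CdataAction and handle_cdata_open are the names introduced above, while
Token, the tokenizer stub and the tree-constructor helpers are
simplified stand-ins rather than the crate's real API:

    enum Token { Char(char) }
    // simplified, non-generic stand-in for Event<T>
    enum Event { Token(Token), CdataOpen }
    enum CdataAction { Cdata, BogusComment }

    struct Tokenizer { queued: Vec<Event> }

    impl Tokenizer {
        fn next_event(&mut self) -> Option<Event> {
            self.queued.pop()
        }
        fn handle_cdata_open(&mut self, _action: CdataAction) {
            // the real tokenizer would enter the CDATA section or
            // bogus comment state here
        }
    }

    struct TreeConstructor { stack_of_open_elements: Vec<String> }

    impl TreeConstructor {
        fn process(&mut self, _token: Token) { /* tree construction step */ }
        fn adjusted_current_node_is_foreign(&self) -> bool {
            // only the tree constructor knows the stack of open elements
            self.stack_of_open_elements
                .last()
                .map_or(false, |name| name.as_str() == "svg" || name.as_str() == "math")
        }
    }

    fn drive(tokenizer: &mut Tokenizer, tree: &mut TreeConstructor) {
        while let Some(event) = tokenizer.next_event() {
            match event {
                Event::Token(token) => tree.process(token),
                Event::CdataOpen => {
                    let action = if tree.adjusted_current_node_is_foreign() {
                        CdataAction::Cdata
                    } else {
                        CdataAction::BogusComment
                    };
                    tokenizer.handle_cdata_open(action);
                }
            }
        }
    }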
|
|
|
|
It doesn't make sense that you're able to construct
a Tokenizer/NaiveParser that you're unable to iterate over.
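A hypothetical illustration of the idea (the reader bound and item type
are placeholders, not the crate's actual trait bounds): requiring the
same bounds on the constructor that the Iterator impl needs means every
value you can construct is also a value you can iterate over.

    struct Tokenizer<R> {
        reader: R,
    }

    // the constructor demands exactly the bounds the Iterator impl needs
    impl<R: Iterator<Item = char>> Tokenizer<R> {
        fn new(reader: R) -> Self {
            Tokenizer { reader }
        }
    }

    impl<R: Iterator<Item = char>> Iterator for Tokenizer<R> {
        type Item = char;
        fn next(&mut self) -> Option<char> {
            self.reader.next()
        }
    }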
|
|
|