Age | Commit message | Author |
|
While the end-of-file token could also be represented by None,
that is less clear than having an explicit variant. This matters
especially for tree construction: the spec explicitly has conditions
named "An end-of-file token", and it's nice if the code for tree
construction can match the spec text closely.
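Roughly, the difference looks like this (the names in this sketch are
illustrative, not the exact definitions used by the crate):

enum Token {
    Char(char),
    // ... start tags, end tags, comments, doctypes ...
    EndOfFile,
}

fn tree_construction_step(token: Token) {
    match token {
        // Mirrors the spec wording "An end-of-file token" directly,
        // instead of matching on Option::None.
        Token::EndOfFile => { /* stop parsing */ }
        Token::Char(_) => { /* ... */ }
    }
}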
|
|
The HTML spec specifies that the tokenizer emits character tokens.
That html5gum always emitted strings instead was probably done just
to make token consumption more convenient. For tree construction,
however, character tokens are actually more convenient than string
tokens, since the spec defines that specific character tokens should
be ignored in specific states (and character tokens let us avoid
string manipulation for these conditions).
This should also make the DefaultEmitter more performant for cases
where you don't actually need the strings at all (or only a few),
since it avoids string allocations. Though I haven't benchmarked it.
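For illustration, this is the kind of spec condition that character
tokens make easy to express (a sketch, not the crate's actual
tree-construction code):

enum Token {
    Char(char),
    // ... other variants ...
}

fn before_html_insertion_mode(token: Token) {
    match token {
        // Spec: "A character token that is one of U+0009, U+000A,
        // U+000C, U+000D, or U+0020" -- ignore the token.
        Token::Char('\t' | '\n' | '\u{000C}' | '\r' | ' ') => {}
        _ => { /* handle everything else */ }
    }
}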
|
|
An error isn't a token (in general, and also according to the spec).
You shouldn't have to filter out errors when you're just interested
in tokens, but most importantly, having errors in the Token enum is
annoying when implementing tree construction (since the spec conditions
exhaustively cover all Token variants except Token::Error).
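Concretely, the annoyance looks like this (an illustrative sketch; the
real Token enum has more variants):

enum Token {
    Char(char),
    EndOfFile,
    Error(String), // not a token according to the spec
}

fn step(token: Token) {
    match token {
        Token::Char(_) => { /* spec: "A character token ..." */ }
        Token::EndOfFile => { /* spec: "An end-of-file token" */ }
        // This arm has no counterpart in the spec conditions:
        Token::Error(_) => {}
    }
}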
|
|
This is done separately so that the next commit has a cleaner diff.
|
|
|
|
Conceptually the tokenizer emits tokens, which are then handled in the
tree construction stage (which this crate doesn't yet implement).
While the tokenizer can operate almost entirely based on its current
state (which may be changed via Tokenizer::set_state) and its other
internal state,
there is the exception of the 'Markup declaration open state'[1], the third
condition of which depends on the "adjusted current node", which in turn
depends on the "stack of open elements" only known to the tree constructor.
In 82898967320f90116bbc686ab7ffc2f61ff456c4 I tried to address this
by adding the adjusted_current_node_present_and_not_in_html_namespace
method to the Emitter trait. What I missed was that adding this method
to the Emitter trait effectively crippled the composability of the API.
You should be able to do the following:
struct TreeConstructor<R, O> {
tokenizer: Tokenizer<R, O, SomeEmitter<O>>,
stack_of_open_elements: Vec<NodeId>,
// ...
}
However this doesn't work if the implementation of SomeEmitter
depends on the stack_of_open_elements field.
This commit remedies this oversight by removing this method and
instead making the Tokenizer yield values of a new Event enum:
enum Event<T> { Token(T), CdataOpen }
Event::CdataOpen signals that the new Tokenizer::handle_cdata_open
method has to be called, which accepts a CdataAction:
enum CdataAction { Cdata, BogusComment }
the variants of which correspond exactly to the possible outcomes
of the third condition of the 'Markup declaration open state'.
Removing this method also has the added benefit that the DefaultEmitter
is now again spec-compliant, which lets us expose it again in the next
commit in good conscience (previously it just hard-coded the method
implementation to return false, which is why I had removed the
DefaultEmitter from the public API in the last release).
[1]: https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state
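For illustration, the resulting control flow looks roughly like this
(the iteration details, process_token and the helper method are
assumptions made for this sketch, not the crate's exact API):

impl<R, O> TreeConstructor<R, O> {
    fn run(&mut self) {
        while let Some(event) = self.tokenizer.next() {
            match event {
                Event::Token(token) => self.process_token(token),
                Event::CdataOpen => {
                    // Third condition of the 'Markup declaration open
                    // state': only the tree constructor can check the
                    // adjusted current node, since it owns the stack
                    // of open elements.
                    let action = if self
                        .adjusted_current_node_present_and_not_in_html_namespace()
                    {
                        CdataAction::Cdata
                    } else {
                        CdataAction::BogusComment
                    };
                    self.tokenizer.handle_cdata_open(action);
                }
            }
        }
    }
}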
|
|
|
|
In the next commit I'm adding a test that compares the contents
of files, and pretty_assertions doesn't omit large portions
of unchanged lines in its diff[1] (unlike similar-asserts).
(Sidenote: We already depend on similar via insta.)
[1]: https://github.com/rust-pretty-assertions/rust-pretty-assertions/issues/114
|
|
Display impls should return human-readable strings. After
this commit we're able to introduce a proper Display impl
in the future without that being a breaking change.
|
|
Just a bit more succinct. And now rustdoc also no longer
cuts off the names of these Emitter methods in its sidebar.
|
|
Making this change made me realize that adding an
`impl IntoIterator for T` can be a breaking change if
`impl IntoIterator for &T` already exists.
See also the cargo-semver-checks issue[1] I filed about that.
[1]: https://github.com/obi1kenobi/cargo-semver-checks/issues/518
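A minimal illustration of the hazard (the types here are made up for
the example):

pub struct Attrs(Vec<String>);

// Suppose a release ships only this borrowed impl:
impl<'a> IntoIterator for &'a Attrs {
    type Item = &'a String;
    type IntoIter = std::slice::Iter<'a, String>;
    fn into_iter(self) -> Self::IntoIter {
        self.0.iter()
    }
}

// A downstream crate can then write the following. Method resolution
// auto-references `attrs`, so the &Attrs impl is used and the items
// are of type &String:
fn clone_all(attrs: Attrs) -> Vec<String> {
    attrs.into_iter().cloned().collect()
}

// If a later release adds `impl IntoIterator for Attrs` (by value,
// with Item = String), the same call resolves to the new impl, the
// item type changes, `.cloned()` no longer applies and the downstream
// crate stops compiling -- so merely adding the impl is a breaking
// change.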
|
|
This has a number of benefits:
* it hides the implementation of the map
* it hides the type used for the map values
(which lets us e.g. change name_span to name_offset while still
being able to provide a convenient `Attribute::name_span` method)
* it lets us provide convenience impls for the map,
such as `FromIterator<(String, String)>` (sketched below)
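For illustration, such a wrapper could be shaped roughly like this
(simplified; not the crate's actual definitions):

use std::collections::BTreeMap;
use std::iter::FromIterator;

// Newtype so the map implementation and its value type stay private
// and can change without breaking users.
pub struct AttributeMap {
    inner: BTreeMap<String, Attribute>,
}

struct Attribute {
    value: String,
    // Span/offset fields can change freely here; accessor methods
    // like `Attribute::name_span` can keep providing the old view.
}

// One of the convenience impls mentioned above.
impl FromIterator<(String, String)> for AttributeMap {
    fn from_iter<I: IntoIterator<Item = (String, String)>>(iter: I) -> Self {
        AttributeMap {
            inner: iter
                .into_iter()
                .map(|(name, value)| (name, Attribute { value }))
                .collect(),
        }
    }
}

With that in place, something like
`[("id".to_string(), "x".to_string())].into_iter().collect::<AttributeMap>()`
works as a convenient way to build the map, e.g. in tests.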
|
|
|
|
This is primarily done to make the rustdoc more readable
(by grouping Reader, IntoReader, StringReader and BufReadReader
in the reader module). Ideally IntoReader is already implemented
for your input type and you don't have to concern yourself
with these traits / types at all.
|
|
The Tokenizer does not perform any state switching, since
proper state switching requires a feedback loop between
tokenization and DOM tree building. Using the Tokenizer
directly is therefore a bit of a pitfall, since you might
not expect it to e.g. tokenize `<script><b>` as:
StartTag(StartTag { name: "script", .. })
StartTag(StartTag { name: "b", .. })
Since we don't want to make walking into pitfalls
particularly easy, this commit changes the Tokenizer::new
method so that you have to specify the Emitter.
This makes new_with_emitter redundant, so it has been removed.
|
|
Previously we mapped the test tokens to our own token type.
Now we do the reverse, which makes more sense as it enables us
to easily add more detailed fields to our own token variants
without having to worry about these fields not being present
in the html5lib test data.
(An alternative would be to normalize the values of these fields
to some arbitrary value so that PartialEq still holds, but seeing
such normalized fields in the diff printed by pretty_assertions
on a test failure would be quite confusing.)
|
|
|
|
|
|
|
|
|
|
Previously `cargo test` failed because it ran the test_html5lib
integration test, which depends on the integration-tests feature
(so you always had to run `cargo test` with
`--features integration-tests` or `--all-features`, which was annoying).
This commit moves the integration tests to another crate,
so that the dependency on the feature can be defined properly
and `cargo test` just works and runs the test.
|