path: root/integration_tests
2023-09-28  chore: rename internal states as well  (Martin Fischer)
See the previous commit.
2023-09-28  break!: remove CdataAction  (Martin Fischer)
Which action the tokenizer takes, depending on whether an adjusted current node is present but not in the HTML namespace, is an implementation detail and shouldn't be exposed in the API.
2023-09-28  feat: implement BasicEmitter  (Martin Fischer)
2023-09-28  break!: move offsets out of Token  (Martin Fischer)
Previously the Token enum contained the offsets using the O generic type parameter, which could be a usize if you're tracking offsets or a zero-sized type if you didn't care about offsets. This commit moves all the byte offset and syntax information to a new Trace enum, which has several advantages:

* Traces can now easily be stored separately, while the tokens are fed to the tree builder. (The tree builder only has to keep track of which tree nodes originate from which tokens.)
* No needless generics for functions that take a token but don't care about offsets (a tree construction implementation is bound to have many such functions).
* The FromIterator<(String, String)> impl for AttributeMap no longer has to specify arbitrary values for the spans and the value_syntax.
* The PartialEq implementation of Token is now much more useful (since it no longer includes all the offsets).
* The Debug formatting of Token is now more readable (since it no longer includes all the offsets).
* Function pointers to functions accepting tokens are possible. (Since function pointer types may not have generic parameters.)
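A minimal sketch of the resulting shape, with illustrative stand-in definitions (the Token/Trace variants and field names here are assumptions made up for the example, not the crate's actual API):

    // Offset-free token, convenient for tree construction.
    #[derive(Debug, PartialEq)]
    enum Token {
        StartTag(String),
        Char(char),
        EndOfFile,
    }

    // Byte-offset and syntax information, kept out of Token.
    #[derive(Debug)]
    enum Trace {
        StartTag { name_offset: usize },
        Char { offset: usize },
    }

    fn main() {
        // An emitter can hand out tokens and traces side by side; a tree
        // builder that doesn't care about offsets simply ignores the traces.
        let emitted = vec![
            (Token::StartTag("p".into()), Some(Trace::StartTag { name_offset: 1 })),
            (Token::Char('x'), Some(Trace::Char { offset: 3 })),
            (Token::EndOfFile, None),
        ];

        // Function pointers over offset-free tokens are now possible,
        // since Token no longer has generic parameters.
        let handle: fn(&Token) = |token| println!("{token:?}");
        for (token, trace) in &emitted {
            handle(token);
            if let Some(trace) = trace {
                println!("  at {trace:?}");
            }
        }
    }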
2023-09-28  refactor: make TracingEmitter only work with usizes  (Martin Fischer)
2023-09-28  chore: add BasicEmitter stub  (Martin Fischer)
2023-09-28  break!: rename DefaultEmitter to TracingEmitter  (Martin Fischer)
2023-09-28  refactor: decouple run_test_inner from DefaultEmitter  (Martin Fischer)
2023-09-28  break!: add Token::EndOfFile  (Martin Fischer)
While the end-of-file token can also be represented by None, this is less clear than having an explicit variant. Especially when it comes to tree construction, the spec explicitly has conditions named "An end-of-file token", and it's nice if the code for tree construction can match the spec text closely.
2023-09-28  break!: emit chars instead of strings  (Martin Fischer)
The HTML spec specifies that the tokenizer emits character tokens. That html5gum always emitted strings instead was probably just done to make token consumption more convenient.

When it comes to tree construction, however, character tokens are actually more convenient than string tokens, since the spec defines that specific character tokens should be ignored in specific states (and character tokens let us avoid string manipulation for these conditions).

This should also make the DefaultEmitter more performant for cases where you don't actually need the strings at all (or only a few of them), since it avoids string allocations. Though I haven't benchmarked it.
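A hedged sketch of what this buys in tree construction (the Token enum and insertion-mode function below are illustrative stand-ins, not the crate's actual definitions):

    // With character tokens, spec conditions like "a character token that is
    // one of U+0009, U+000A, U+000C, U+000D or U+0020: ignore the token"
    // become a simple match, with no string slicing or allocation involved.
    enum Token {
        Char(char),
        EndOfFile,
        // other variants elided for the sketch
    }

    fn before_html_mode(token: Token) {
        match token {
            // Ignore the listed whitespace character tokens.
            Token::Char('\t' | '\n' | '\u{C}' | '\r' | ' ') => {}
            Token::Char(c) => println!("non-whitespace character: {c:?}"),
            Token::EndOfFile => println!("end of file"),
        }
    }

    fn main() {
        before_html_mode(Token::Char(' ')); // ignored
        before_html_mode(Token::Char('x'));
        before_html_mode(Token::EndOfFile);
    }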
2023-09-28  break!: remove Token::Error  (Martin Fischer)
An error isn't a token (in general, and also according to the spec). You shouldn't have to filter out errors when you're just interested in tokens, but most importantly, having errors in the Token enum is annoying when implementing tree construction (since the spec conditions exhaustively cover all Token variants except Token::Error).
2023-09-28  chore: build html5lib_tests::Output later  (Martin Fischer)
This is done separately so that the next commit has a cleaner diff.
2023-09-03  break!: make Doctype name field optional  (Martin Fischer)
2023-09-03  fix!: remove adjusted_current_node_present_and_not_in_html_namespace  (Martin Fischer)
Conceptually the tokenizer emits tokens, which are then handled in the tree construction stage (which this crate doesn't yet implement). While the tokenizer can operate almost entirely based on its state (which may be changed via Tokenizer::set_state) and its internal state, there is the exception of the 'Markup declaration open state'[1], the third condition of which depends on the "adjusted current node", which in turn depends on the "stack of open elements" only known to the tree constructor.

In 82898967320f90116bbc686ab7ffc2f61ff456c4 I tried to address this by adding the adjusted_current_node_present_and_not_in_html_namespace method to the Emitter trait. What I missed was that adding this method to the Emitter trait effectively crippled the composability of the API. You should be able to do the following:

    struct TreeConstructor<R, O> {
        tokenizer: Tokenizer<R, O, SomeEmitter<O>>,
        stack_of_open_elements: Vec<NodeId>,
        // ...
    }

However this doesn't work if the implementation of SomeEmitter depends on the stack_of_open_elements field.

This commit remedies this oversight by removing this method and instead making the Tokenizer yield values of a new Event enum:

    enum Event<T> {
        Token(T),
        CdataOpen,
    }

Event::CdataOpen signals that the new Tokenizer::handle_cdata_open method has to be called, which accepts a CdataAction:

    enum CdataAction {
        Cdata,
        BogusComment,
    }

the variants of which correspond exactly to the possible outcomes of the third condition of the 'Markup declaration open state'.

Removing this method also has the added benefit that the DefaultEmitter is now again spec-compliant, which lets us expose it again in the next commit in good conscience (previously it just hard-coded the method implementation to return false, which is why I had removed the DefaultEmitter from the public API in the last release).

[1]: https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state
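A sketch of how a tree constructor might drive this (Event, CdataAction and Tokenizer::handle_cdata_open are named in the commit message; everything else below, including the namespace check, is an assumption made up for the example):

    enum Event<T> {
        Token(T),
        CdataOpen,
    }

    enum CdataAction {
        Cdata,
        BogusComment,
    }

    struct TreeConstructor {
        // Only the tree constructor knows the stack of open elements.
        stack_of_open_elements: Vec<&'static str>,
    }

    impl TreeConstructor {
        // Third condition of the 'Markup declaration open state': is there an
        // adjusted current node that is not in the HTML namespace?
        // (Toy check; a real implementation would track namespaces properly.)
        fn adjusted_current_node_is_foreign(&self) -> bool {
            matches!(self.stack_of_open_elements.last(), Some(&"svg") | Some(&"math"))
        }

        fn handle_event(&mut self, event: Event<String>) {
            match event {
                Event::Token(token) => println!("token: {token}"),
                Event::CdataOpen => {
                    let action = if self.adjusted_current_node_is_foreign() {
                        CdataAction::Cdata
                    } else {
                        CdataAction::BogusComment
                    };
                    // ...which would then be passed back via
                    // Tokenizer::handle_cdata_open(action).
                    match action {
                        CdataAction::Cdata => println!("tokenize as a CDATA section"),
                        CdataAction::BogusComment => println!("tokenize as a bogus comment"),
                    }
                }
            }
        }
    }

    fn main() {
        let mut constructor = TreeConstructor { stack_of_open_elements: vec!["svg"] };
        constructor.handle_event(Event::CdataOpen);
        constructor.handle_event(Event::Token("character token".to_string()));
    }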
2023-08-19  break!: remove type param defaults from Tokenizer  (Martin Fischer)
2023-08-19  chore: switch from pretty_assertions to similar-asserts  (Martin Fischer)
In the next commit I'm adding a test that compares the contents of files, and pretty_assertions doesn't omit large portions of unchanged lines in its diff[1] (contrary to similar-asserts). (Sidenote: We already depend on similar via insta.)

[1]: https://github.com/rust-pretty-assertions/rust-pretty-assertions/issues/114
2023-08-19  break!: stop abusing Display for Error codes  (Martin Fischer)
Display impls should return human-readable strings. After this commit we're able to introduce a proper Display impl in the future without that being a breaking change.
2023-08-19  break!: rename doctype _identifier methods/fields to _id  (Martin Fischer)
Just a bit more succinct. And now rustdoc also no longer cuts off the names of these Emitter methods in its sidebar.
2023-08-19  feat: impl IntoIterator for AttributeMap  (Martin Fischer)
Making this change made me realize that adding an `impl IntoIterator for T` can be a breaking change if `impl IntoIterator for &T` already exists. See also the cargo-semver-checks issue[1] I filed about that.

[1]: https://github.com/obi1kenobi/cargo-semver-checks/issues/518
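A toy demonstration of the hazard, assuming a stand-in Map type (not the crate's AttributeMap):

    struct Map {
        items: Vec<(String, String)>,
    }

    // Suppose only this borrowed impl exists in version N:
    impl<'a> IntoIterator for &'a Map {
        type Item = &'a (String, String);
        type IntoIter = std::slice::Iter<'a, (String, String)>;
        fn into_iter(self) -> Self::IntoIter {
            self.items.iter()
        }
    }

    // In version N, `map.into_iter()` auto-references `map`, yields
    // `&(String, String)` items and leaves `map` usable afterwards.
    //
    // Adding this owned impl in version N+1 changes what that same call
    // resolves to: it now consumes `map` and yields owned items, so the item
    // type and the move behavior of existing caller code change, which can
    // break compilation downstream.
    impl IntoIterator for Map {
        type Item = (String, String);
        type IntoIter = std::vec::IntoIter<(String, String)>;
        fn into_iter(self) -> Self::IntoIter {
            self.items.into_iter()
        }
    }

    fn main() {
        let map = Map { items: vec![("href".into(), "/".into())] };
        // With both impls present, method-call syntax picks the owned impl.
        let first: Option<(String, String)> = map.into_iter().next();
        println!("{first:?}");
    }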
2023-08-19  break!: introduce AttributeMap  (Martin Fischer)
This has a number of benefits:

* it hides the implementation of the map
* it hides the type used for the map values (which lets us e.g. change name_span to name_offset while still being able to provide a convenient `Attribute::name_span` method)
* it lets us provide convenience impls for the map such as `FromIterator<(String, String)>`
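A hypothetical usage sketch of the `FromIterator<(String, String)>` convenience (the AttributeMap definition below is a stand-in so the example is self-contained; the real type hides its internals, which is the point):

    use std::collections::BTreeMap;

    struct AttributeMap {
        inner: BTreeMap<String, String>,
    }

    impl FromIterator<(String, String)> for AttributeMap {
        fn from_iter<I: IntoIterator<Item = (String, String)>>(iter: I) -> Self {
            AttributeMap { inner: iter.into_iter().collect() }
        }
    }

    fn main() {
        // Callers (e.g. tests) can build an attribute map without ever
        // naming the map's internal value type:
        let attrs: AttributeMap = [
            ("href".to_string(), "/".to_string()),
            ("class".to_string(), "nav".to_string()),
        ]
        .into_iter()
        .collect();

        println!("{:?}", attrs.inner.get("href"));
    }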
2023-08-19  feat!: add offset to comments  (Martin Fischer)
2023-08-19  break!: stop re-exporting reader traits & types  (Martin Fischer)
This is primarily done to make the rustdoc more readable (by grouping Reader, IntoReader, StringReader and BufReadReader in the reader module). Ideally IntoReader is already implemented for your input type and you don't have to concern yourself with these traits / types at all.
2023-08-19  break!: merge Tokenizer::new_with_emitter into Tokenizer::new  (Martin Fischer)
The Tokenizer does not perform any state switching, since proper state switching requires a feedback loop between tokenization and DOM tree building. Using the Tokenizer directly therefore is a bit of a pitfall, since you might not expect it to e.g. tokenize `<script><b>` as:

    StartTag(StartTag { name: "script", .. })
    StartTag(StartTag { name: "b", .. })

Since we don't want to make walking into pitfalls particularly easy, this commit changes the Tokenizer::new method so that you have to specify the Emitter. Since this makes new_with_emitter redundant, it is removed.
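A toy illustration of the API-shape change (these are stand-in types, not the crate's actual signatures): requiring the emitter argument makes the choice explicit at every construction site.

    struct DefaultEmitter;

    struct Tokenizer<E> {
        emitter: E,
        input: String,
    }

    impl<E> Tokenizer<E> {
        // There is no emitter-less constructor anymore; the caller has to
        // name an emitter, which is a deliberate speed bump reminding them
        // that no state switching happens on its own.
        fn new(input: &str, emitter: E) -> Self {
            Tokenizer { emitter, input: input.to_string() }
        }
    }

    fn main() {
        let tokenizer = Tokenizer::new("<script><b>", DefaultEmitter);
        let _ = (tokenizer.input, tokenizer.emitter);
    }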
2023-08-19  refactor: decouple html5lib_tests from html5tokenizer  (Martin Fischer)
Previously we mapped the test tokens to our own token type. Now we do the reverse, which makes more sense as it enables us to easily add more detailed fields to our own token variants without having to worry about these fields not being present in the html5lib test data. (An alternative would be to normalize the values of these fields to some arbitrary value so that PartialEq still holds, but seeing such normalized fields in the diff printed by pretty_assertions on a test failure would be quite confusing.)
2023-08-19  refactor: split off reusable html5lib_tests crate  (Martin Fischer)
2023-08-19  refactor: separate test logic from html5lib-test parsing  (Martin Fischer)
2023-08-19  test: enable previously skipped tokenizer test  (Martin Fischer)
2023-08-19  break!: remove set_last_start_tag from Emitter  (Martin Fischer)
2023-08-19  refactor: move html5lib test to own crate to fix `cargo test`  (Martin Fischer)
Previously `cargo test` failed because it ran the test_html5lib integration test, which depends on the integration-tests feature (so you always had to run `cargo test` with `--features integration-tests` or `--all-features`, which was annoying). This commit moves the integration tests to another crate, so that the dependency on the feature can be properly defined and `cargo test` just works and runs the test.