The file is linked from the README.md, so it should also be included.
This bug resulted in e.g. "<script></SCRI" being wrongly tokenized as:
StartTag(StartTag { name: "script", self_closing: false, attributes: {} })
Char('<')
Char('/')
Char('s')
Char('c')
Char('r')
Char('i')
EndOfFile
Note that the Char tokens should be uppercase. (This bug could only be
observed when properly doing state switching via tree construction.)
The Debug formatting is more readable when the name comes first.
These variants actually don't need to be exposed (to the tree builder).
Most of the time you'll only be interested in the attribute value,
so the `get` method should directly return it instead of a wrapper type.
This also makes the API more predictable, since e.g. the DOM getAttribute
method also returns a string. Lastly, it was previously quite confusing
that map[key] wasn't equivalent to map.get(key).unwrap().
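A minimal sketch of the idea, using a hypothetical AttributeMap stand-in (not the crate's actual definition):

```rust
use std::collections::BTreeMap;
use std::ops::Index;

// Hypothetical stand-in for the crate's AttributeMap.
struct AttributeMap {
    inner: BTreeMap<String, String>,
}

impl AttributeMap {
    // `get` returns the value string directly instead of a wrapper type,
    // mirroring the DOM's getAttribute.
    fn get(&self, name: &str) -> Option<&str> {
        self.inner.get(name).map(String::as_str)
    }
}

impl Index<&str> for AttributeMap {
    type Output = str;
    fn index(&self, name: &str) -> &str {
        self.get(name).expect("no such attribute")
    }
}

fn main() {
    let mut inner = BTreeMap::new();
    inner.insert("href".to_string(), "/home".to_string());
    let map = AttributeMap { inner };
    // With `get` returning the value directly, indexing and
    // `get(..).unwrap()` now agree:
    assert_eq!(map.get("href").unwrap(), &map["href"]);
}
```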
See the previous commit.
The spec refers to them only as RCDATA, RAWTEXT and PLAINTEXT.
See https://rust-lang.github.io/api-guidelines/naming.html.
Which action the tokenizer takes depending on whether an adjusted
current node is present but not in the HTML namespace is an
implementation detail and shouldn't be exposed in the API.
Previously the Token enum contained the offsets using the O generic
type parameter, which could be a usize if you were tracking offsets or
a zero-sized type if you weren't. This commit moves
all the byte offset and syntax information to a new Trace enum,
which has several advantages:
* Traces can now easily be stored separately, while the tokens are
fed to the tree builder. (The tree builder only has to keep track
of which tree nodes originate from which tokens.)
* No needless generics for functions that take a token but don't
care about offsets (a tree construction implementation is bound
to have many such functions).
* The FromIterator<(String, String)> impl for AttributeMap no longer
has to specify arbitrary values for the spans and the value_syntax.
* The PartialEq implementation of Token is now much more useful
(since it no longer includes all the offsets).
* The Debug formatting of Token is now more readable
(since it no longer includes all the offsets).
* Function pointers to functions accepting tokens are possible.
(Since function pointer types may not have generic parameters.)
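The split might be sketched roughly like this; the names and fields below are illustrative assumptions, not the crate's exact API:

```rust
// Simplified sketch: tokens carry no offsets at all...
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Token {
    StartTag { name: String },
    Char(char),
    EndOfFile,
}

// ...while all span information lives in a parallel Trace value
// that can be stored separately from the token stream.
#[allow(dead_code)]
#[derive(Debug)]
enum Trace {
    StartTag { span: std::ops::Range<usize> },
    Char { offset: usize },
    EndOfFile { offset: usize },
}

// Because Token has no generic offset parameter, plain function
// pointers over tokens are possible:
fn is_eof(token: &Token) -> bool {
    matches!(token, Token::EndOfFile)
}

fn main() {
    let tokens = vec![Token::Char('a'), Token::EndOfFile];
    let traces = vec![Trace::Char { offset: 0 }, Trace::EndOfFile { offset: 1 }];
    // Traces are kept on the side while tokens are fed onwards.
    assert_eq!(traces.len(), tokens.len());
    // A non-generic function pointer over tokens:
    let f: fn(&Token) -> bool = is_eof;
    assert!(f(&tokens[1]));
    // PartialEq is now useful: equality ignores offsets entirely.
    assert_eq!(tokens[0], Token::Char('a'));
}
```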
See the previous commit.
The method never really made much sense since
you could just as well use NaiveParser::new.
While the end-of-file token can also be represented by None,
this is less clear than having an explicit variant. Especially when
it comes to tree construction, the spec explicitly has conditions
named "An end-of-file token", and it's nice if the code for tree
construction can match the spec text closely.
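As an illustrative sketch (hypothetical code, not the crate's actual tree-construction implementation), an explicit variant lets a match arm read like the spec's wording:

```rust
#[allow(dead_code)]
enum Token {
    Char(char),
    EndOfFile,
}

// The spec has conditions literally named "An end-of-file token",
// and an explicit variant lets the match mirror that wording,
// instead of a less self-explanatory `None` check.
fn describe(token: &Token) -> &'static str {
    match token {
        Token::EndOfFile => "an end-of-file token",
        Token::Char(_) => "a character token",
    }
}

fn main() {
    assert_eq!(describe(&Token::EndOfFile), "an end-of-file token");
    assert_eq!(describe(&Token::Char('x')), "a character token");
}
```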
The HTML spec specifies that the tokenizer emits character tokens.
That html5gum always emitted strings instead was probably just done
to make token consumption more convenient. When it comes to tree
construction, however, character tokens are actually more convenient
than string tokens, since the spec defines that specific character
tokens should be ignored in specific states (and character tokens
let us avoid string manipulation for these conditions).
This should also make the DefaultEmitter more performant for cases
where you don't actually need the strings at all (or only a few),
since it avoids string allocations. Though I haven't benchmarked it.
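For example, the spec's "ignore the token" conditions for specific characters can be expressed as a plain match on the char, with no string slicing or allocation (a sketch with a pared-down Token enum, not the crate's actual code):

```rust
enum Token {
    Char(char),
    // ...other variants elided in this sketch
}

// Several tree-construction states ignore character tokens that are
// ASCII whitespace; with char tokens this is a simple pattern match.
fn is_ignorable_whitespace(token: &Token) -> bool {
    matches!(token, Token::Char('\t' | '\n' | '\x0C' | '\r' | ' '))
}

fn main() {
    assert!(is_ignorable_whitespace(&Token::Char('\n')));
    assert!(!is_ignorable_whitespace(&Token::Char('a')));
}
```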
This is done separately so that the next commit has a cleaner diff.
This commit separates the public API (the "Tokenizer")
from the internal implementation (the "Machine")
to make the code more readable.
Methods defined in another module don't have access to private fields,
so the function could very well have been implemented as a method.
An error isn't a token (in general, and also according to the spec).
You shouldn't have to filter out errors when you're just interested
in tokens, but most importantly, having errors in the Token enum is
annoying when implementing tree construction (since the spec conditions
exhaustively cover all Token variants except Token::Error).
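A sketch of the resulting exhaustive match (with a pared-down, hypothetical Token enum):

```rust
#[allow(dead_code)]
enum Token {
    StartTag(String),
    Char(char),
    EndOfFile,
}

// With errors reported on a separate channel, tree construction can
// match Token exhaustively: every arm corresponds to a spec condition,
// and no catch-all arm for a Token::Error variant is needed.
fn describe(token: &Token) -> &'static str {
    match token {
        Token::StartTag(_) => "a start tag token",
        Token::Char(_) => "a character token",
        Token::EndOfFile => "an end-of-file token",
    }
}

fn main() {
    assert_eq!(describe(&Token::EndOfFile), "an end-of-file token");
}
```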
This is done separately so that the next commit has a cleaner diff.
The commit after next will move errors out of the Token enum,
but we still want to be able to test that the spans of errors
are character-encoding independent.