We'll reuse the field for another offset in the next commit.
With codespan_reporting, an empty span shows up exactly like a
one-byte span, which is why I didn't notice this mistake earlier.
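
For illustration, a minimal codespan_reporting sketch (the file name, source
and messages are made up) that emits an empty span next to a one-byte span,
which is how the identical rendering described above can be observed:

    use codespan_reporting::diagnostic::{Diagnostic, Label};
    use codespan_reporting::files::SimpleFile;
    use codespan_reporting::term::{self, termcolor::{ColorChoice, StandardStream}};

    fn main() {
        let file = SimpleFile::new("input.html", "<div></div>");
        // An empty span (1..1) next to a one-byte span (1..2).
        let diagnostic = Diagnostic::note()
            .with_message("empty vs. one-byte span")
            .with_labels(vec![
                Label::primary((), 1..1).with_message("empty span"),
                Label::secondary((), 1..2).with_message("one-byte span"),
            ]);
        let writer = StandardStream::stderr(ColorChoice::Auto);
        term::emit(&mut writer.lock(), &term::Config::default(), &file, &diagnostic).unwrap();
    }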
Conceptually the tokenizer emits tokens, which are then handled in the
tree construction stage (which this crate doesn't yet implement).
While the tokenizer can operate almost entirely based on its current state
(which may be changed via Tokenizer::set_state) and its internal state,
there is one exception: the 'Markup declaration open state'[1], whose third
condition depends on the "adjusted current node", which in turn depends on
the "stack of open elements" known only to the tree constructor.
In 82898967320f90116bbc686ab7ffc2f61ff456c4 I tried to address this
by adding the adjusted_current_node_present_and_not_in_html_namespace
method to the Emitter trait. What I missed was that adding this method
to the Emitter trait effectively crippled the composability of the API.
You should be able to do the following:

    struct TreeConstructor<R, O> {
        tokenizer: Tokenizer<R, O, SomeEmitter<O>>,
        stack_of_open_elements: Vec<NodeId>,
        // ...
    }

However, this doesn't work if the implementation of SomeEmitter
depends on the stack_of_open_elements field.
This commit remedies the oversight by removing that method and
instead making the Tokenizer yield values of a new Event enum:

    enum Event<T> { Token(T), CdataOpen }

Event::CdataOpen signals that the new Tokenizer::handle_cdata_open
method has to be called, which accepts a CdataAction:

    enum CdataAction { Cdata, BogusComment }

whose variants correspond exactly to the possible outcomes
of the third condition of the 'Markup declaration open state'.
Removing this method also has the added benefit that the DefaultEmitter
is now again spec-compliant, which lets us expose it again in the next
commit in good conscience (previously it just hard-coded the method
implementation to return false, which is why I had removed the
DefaultEmitter from the public API in the last release).
[1]: https://html.spec.whatwg.org/multipage/parsing.html#markup-declaration-open-state
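
To make the new flow concrete, here is a rough sketch of how a tree
constructor could drive it. The Event and CdataAction definitions are the
ones quoted above; the TreeConstructor fields, the NodeId stand-in and the
helper method are hypothetical:

    enum Event<T> { Token(T), CdataOpen }
    enum CdataAction { Cdata, BogusComment }

    struct TreeConstructor<T> {
        stack_of_open_elements: Vec<usize>, // stand-in for Vec<NodeId>
        _tokens: std::marker::PhantomData<T>,
    }

    impl<T> TreeConstructor<T> {
        // Returns the action to pass to Tokenizer::handle_cdata_open, if any.
        fn process(&mut self, event: Event<T>) -> Option<CdataAction> {
            match event {
                Event::Token(_token) => {
                    // regular tree construction ...
                    None
                }
                Event::CdataOpen => {
                    // Only the tree constructor can evaluate the third
                    // condition, since only it knows the stack of open
                    // elements.
                    if self.adjusted_current_node_present_and_not_in_html_namespace() {
                        Some(CdataAction::Cdata)
                    } else {
                        Some(CdataAction::BogusComment)
                    }
                }
            }
        }

        fn adjusted_current_node_present_and_not_in_html_namespace(&self) -> bool {
            // hypothetical: would inspect self.stack_of_open_elements
            !self.stack_of_open_elements.is_empty()
        }
    }

The tokenizer never sees the stack of open elements; it only learns the
outcome via handle_cdata_open.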
It doesn't make sense that you're able to construct
a Tokenizer/NaiveParser that you're unable to iterate over.
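
A minimal sketch of the idea with stand-in types (not the crate's real
definitions): if the bound needed for iteration is already required at
construction time, every value you can construct is also one you can
iterate over:

    trait Reader { fn read_char(&mut self) -> Option<char>; }

    struct Tokenizer<R: Reader> { reader: R }

    impl<R: Reader> Tokenizer<R> {
        // The same R: Reader bound guards both construction and iteration.
        fn new(reader: R) -> Self { Tokenizer { reader } }
    }

    impl<R: Reader> Iterator for Tokenizer<R> {
        type Item = char; // the real item type (tokens/events) is elided
        fn next(&mut self) -> Option<char> { self.reader.read_char() }
    }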
Also more performant since we no longer have to update
the name span on every Emitter::push_tag_name call.
Previously the PosTrackingReader always mysteriously subtracted 1
from the current position. This wasn't sound at all: the machine just
happens to call `Tokenizer::unread_char` often, but not always.
E.g. for proper comments it didn't, which resulted in their offsets and
spans being off by one; this commit fixes that (see test_spans.rs).
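
One way to keep such positions exact is to move them only when a char is
actually read or unread, instead of applying a blanket correction. This is
just an illustrative sketch, not necessarily how PosTrackingReader is
implemented:

    struct PosTracking<I> {
        chars: I,
        unread: Vec<char>,
        position: usize, // byte offset of the next char to be read
    }

    impl<I: Iterator<Item = char>> PosTracking<I> {
        fn read_char(&mut self) -> Option<char> {
            let c = self.unread.pop().or_else(|| self.chars.next())?;
            self.position += c.len_utf8();
            Some(c)
        }

        fn unread_char(&mut self, c: char) {
            // Undo the advance only when a char is actually unread.
            self.position -= c.len_utf8();
            self.unread.push(c);
        }
    }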
Emitters should not have access to the reader at all. Also, the
current position of the reader at the time an Emitter method is
called very much depends on machine implementation details, such
as whether `Tokenizer::unread_char` is used. Having the Emitter
methods take offsets lets the machine take care of providing
the right offsets, as evidenced by the next commit.
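
A rough sketch of what such an offset-taking interface might look like;
the method names and the exact set of callbacks are assumptions for
illustration, not the crate's actual Emitter trait:

    trait Emitter {
        type Token;

        // The state machine passes the byte offset it has computed, which
        // already accounts for details like Tokenizer::unread_char.
        fn init_start_tag(&mut self, tag_offset: usize);
        fn init_comment(&mut self, data_offset: usize);

        fn push_tag_name(&mut self, s: &str);
        fn pop_token(&mut self) -> Option<Self::Token>;
    }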
`std::mem::size_of::<Range<NoopOffset>>()` is 0,
so there's no need to abstract over Range.
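
A quick check of that claim, with a stand-in zero-sized NoopOffset in place
of the crate's type:

    use std::mem::size_of;
    use std::ops::Range;

    #[derive(Clone, Copy, Default)]
    struct NoopOffset;

    fn main() {
        // Range<T> is just { start: T, end: T }, so for a zero-sized
        // offset type the whole range is zero-sized as well.
        assert_eq!(size_of::<Range<NoopOffset>>(), 0);
        assert_eq!(size_of::<Range<usize>>(), 2 * size_of::<usize>());
    }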
Previously Span was generic over R just
so that it could provide the method:

    fn from_reader(reader: &R) -> Self;

and properly implementing that method in turn
relied on R implementing the Position trait:

    impl<P: Position> Span<P> for Range<usize> { .. }

which was a very roundabout and awkward way of doing things.
It makes much more sense to make the Position trait generic
over the return type of its method (which previously always had
to be usize). This lets us provide a blanket implementation:

    impl<R: Reader> Position<NoopOffset> for R { .. }
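
A sketch of the resulting shape (the trait and type names follow the text
above; the exact method signature is an assumption):

    trait Reader { /* read_char etc. elided */ }

    #[derive(Clone, Copy, Default)]
    struct NoopOffset;

    // Position is generic over the offset type it reports.
    trait Position<O> {
        fn position(&self) -> O;
    }

    // Blanket implementation: any Reader can report a no-op offset.
    impl<R: Reader> Position<NoopOffset> for R {
        fn position(&self) -> NoopOffset { NoopOffset }
    }

    // A position-tracking wrapper reports real byte offsets instead.
    struct PosTrackingReader<R> { inner: R, pos: usize }

    impl<R: Reader> Position<usize> for PosTrackingReader<R> {
        fn position(&self) -> usize { self.pos }
    }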
This is primarily done to make the rustdoc more readable
(by grouping Reader, IntoReader, StringReader and BufReadReader
in the reader module). Ideally IntoReader is already implemented
for your input type and you don't have to concern yourself
with these traits / types at all.
The Tokenizer does not perform any state switching, since
proper state switching requires a feedback loop between
tokenization and DOM tree building. Using the Tokenizer
directly is therefore a bit of a pitfall, since you might
not expect it to e.g. tokenize `<script><b>` as:

    StartTag(StartTag { name: "script", .. })
    StartTag(StartTag { name: "b", .. })

Since we don't want to make walking into pitfalls
particularly easy, this commit changes the Tokenizer::new
method so that you have to specify the Emitter. This makes
new_with_emitter redundant, so it is removed.
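
The shape of the change, using stand-in types rather than the real
definitions:

    trait Emitter { /* token construction callbacks elided */ }

    struct Tokenizer<R, E: Emitter> { reader: R, emitter: E }

    impl<R, E: Emitter> Tokenizer<R, E> {
        // The emitter is now an explicit argument: driving the tokenizer
        // without tree-construction feedback becomes a conscious choice
        // rather than a hidden default.
        fn new(reader: R, emitter: E) -> Self {
            Tokenizer { reader, emitter }
        }
    }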
The corresponding trait in the standard library is also
called IntoIterator, not Iterable.
dced8066f77f570dd3e396ec3570c71aa86c454e introduced a Readable impl for
std::io::BufReader. Manually listing impls in a doc comment is a bad idea:
such lists just get out of date, and there is no need for them since
rustdoc automatically lists all implementations on the trait page.
ScriptData states
purpose: don't want to expose self.to_reconsume to the consume() method