|
After a move, some entries in the hash table can have stale keys; this
makes the table somewhat larger, but it does not impact correctness.
The reason is that to access a key in the hash table, there must be a
compact_pointer/string object whose state indicates that it is stored in
the hash table and whose address matches the key. To put an object into
that state, we have to store it into the hash table again, which
overwrites the stale entry with the new, correct value.
When nodes/pages are being removed, we do not clean up keys from the
hash table - it's safe for the same reason, and thus move doesn't
introduce additional contracts here.
|
|
We now check that appending a child to a moved document performs no
allocations - this is already the case, but if we had neglected to copy the
allocator state, this test would fail.
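A sketch of how such a check can be expressed via
pugi::set_memory_management_functions (the counter and test body below are
illustrative, not the actual test):

    #include <cassert>
    #include <cstdlib>
    #include <utility>
    #include "pugixml.hpp"

    static size_t g_allocations = 0;

    static void* counting_allocate(size_t size)
    {
        ++g_allocations;
        return std::malloc(size);
    }

    static void counting_deallocate(void* ptr)
    {
        std::free(ptr);
    }

    int main()
    {
        pugi::set_memory_management_functions(counting_allocate, counting_deallocate);

        pugi::xml_document source;
        source.append_child("node");

        pugi::xml_document target = std::move(source);

        size_t before = g_allocations;
        target.append_child("child"); // reuses the moved allocator state
        assert(g_allocations == before);
    }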
|
|
Large test wasn't testing shared parent condition properly - add one
more level of hierarchy so that it works as expected.
|
|
Add a test that verifies the static buffer pointer was moved correctly by
checking that offset_debug still works.
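A sketch of the idea (assuming move support is available; the exact
assertions in the real test may differ):

    #include <cassert>
    #include <utility>
    #include "pugixml.hpp"

    int main()
    {
        char buffer[] = "<node attr='1'/>";

        pugi::xml_document source;
        source.load_buffer_inplace(buffer, sizeof(buffer) - 1);

        // If the buffer pointer weren't carried over by the move,
        // offset_debug() could not compute an offset here (it returns -1
        // on failure).
        pugi::xml_document target = std::move(source);
        assert(target.child("node").offset_debug() >= 0);
    }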
|
|
Make sure we have coverage for empty documents and for large documents
that trigger compact_shared_parent != root for some pages.
|
|
Verify that move doesn't allocate and that it keeps the structures required
for tree memory management and append_buffer intact.
|
|
|
|
These just verify that move ctor/assignment operator work as expected in
simple cases - there are a number of ways in which the internal
structure can be incorrect...
|
|
This change implements the initial version of move construction and
assignment support for documents.
When moving a document into another document, we always make sure the move
target is in a "clean" state (an empty document), and proceed by relocating
all structures in the most efficient way possible.
Complications arise from the fact that the root (document) node is embedded
into the xml_document object, so all pointers to it have to change; this
includes parent pointers of all first-level children, allocator pointers in
all memory pages, and the previous pointer in the first on-heap memory page.
Additionally, compact mode makes everything even more complicated because
some of the pointers we need to update are stored in the hash table (in
fact, the document's first_child pointer is very likely to be there, and
some parent pointers in first-level children will use compact_shared_parent
while others won't). Updating them requires allocating a new hash table,
which can fail.
Some details of this process are not fully fleshed out, especially for
compact mode, and this definitely requires many tests.
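Based on the description above, the user-facing result is that xml_document
becomes movable; a minimal usage sketch (assuming a compiler with move
support):

    #include <utility>
    #include "pugixml.hpp"

    pugi::xml_document make_document()
    {
        pugi::xml_document doc;
        doc.append_child("root").append_child("child");
        return doc; // relies on the move support described above
    }

    int main()
    {
        // Move construction: the target takes over all pages and structures.
        pugi::xml_document doc = make_document();

        // Move assignment: the target is reset to a clean (empty) state first.
        pugi::xml_document other;
        other = std::move(doc);

        return other.child("root").child("child") ? 0 : 1;
    }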
|
|
|
|
It has always been the case that pugixml does not perform Unicode
validation or name/tag Unicode character class validation, but this wasn't
very obvious from the documentation.
Fixes #162
|
|
We support Latin-1 and automatically detect it by parsing the encoding name
from the document declaration; both of these facts were omitted from the
description of automatic detection.
Additionally, the description has been rewritten to be more concise and
a bit more abstract - there's no need to specify the algorithm precisely
here.
Fixes #158.
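For example, a buffer like the following should be recognized as Latin-1
purely from the declaration (sketch; this assumes ISO-8859-1 is among the
recognized spellings of the encoding name):

    #include <cassert>
    #include "pugixml.hpp"

    int main()
    {
        // '\xE9' is e-acute in Latin-1; the encoding comes from the declaration.
        const char source[] =
            "<?xml version='1.0' encoding='ISO-8859-1'?><node attr='\xE9'/>";

        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_buffer(source, sizeof(source) - 1);
        assert(result && result.encoding == pugi::encoding_latin1);
    }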
|
|
Due to a typo in the build script, v141 binaries were built using VS2015
instead of VS2017.
Fixes #157.
|
|
Using LTCG restricts the resulting .lib files to a specific compiler
version, causing version conflicts when the compiler gets updated
without changing the toolset version. VS2017 now has two incompatible
compilers, 15.0 and 15.3, both of which use toolset v141...
|
|
Clang/C2 does not implement __builtin_expect; additionally we need to
work around deprecation warnings for fopen by disabling them.
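A hedged sketch of the shape of such a workaround (macro names and guards
are illustrative, not pugixml's actual ones):

    // Use the branch-prediction hint only when the compiler provides it.
    #if defined(__has_builtin)
    #  if __has_builtin(__builtin_expect)
    #    define EXPECT_UNLIKELY(cond) __builtin_expect(!!(cond), 0)
    #  endif
    #endif

    // Compilers where we can't confirm the builtin fall back to a no-op.
    #ifndef EXPECT_UNLIKELY
    #  define EXPECT_UNLIKELY(cond) (cond)
    #endif

    #ifdef _MSC_VER
    // Silence the CRT deprecation warning for fopen instead of switching APIs.
    #  pragma warning(disable : 4996)
    #endif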
|
|
Switch codecov.io URLs to https
|
|
|
|
These new tests check that tellg() can fail when called a second time,
which causes the seekable implementation to fail.
|
|
These tests simulate various error conditions when reading data from
streams - seeks failing in seekable streams, underflow throwing an
exception that causes read to set badbit, etc.
This change also adjusts memory thresholds to reliably trigger an
out-of-memory failure during construction of the final buffer for
non-seekable streams.
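One way to simulate these conditions is a custom streambuf along these
lines (hypothetical sketch, not the test suite's actual helper):

    #include <istream>
    #include <stdexcept>
    #include <streambuf>
    #include "pugixml.hpp"

    struct failing_streambuf: std::streambuf
    {
        // Reading data throws; the istream catches this and sets badbit.
        int_type underflow()
        {
            throw std::runtime_error("simulated read failure");
        }

        // Seeks fail, so the stream is treated as non-seekable.
        pos_type seekoff(off_type, std::ios_base::seekdir, std::ios_base::openmode)
        {
            return pos_type(off_type(-1));
        }
    };

    int main()
    {
        failing_streambuf buf;
        std::istream in(&buf);

        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load(in);
        return result ? 1 : 0; // expect an I/O error, not a crash
    }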
|
|
|
|
This fixes missing coverage in translate_table_generate and
xpath_node_set_raw::append.
|
|
Hiding a using-namespace directive in common.hpp is somewhat surprising, so
remove common.hpp and move the directive into all .cpp files that need it.
|
|
Most tests have `using namespace pugi` which makes explicit
qualifications unnecessary.
|
|
It's not clear whether we still need PUGI__MSVC_CRT_VERSION, but it's
more consistent for now to use it for _snprintf_s since this is relying
on a CRT extension, not on a compiler feature.
|
|
These functions were deprecated via comments in 1.5 but never got the
deprecated attribute; now is the time!
Using deprecated functions produces a warning; to silence it, this
change moves the relevant tests to a separate translation unit that has
deprecation disabled.
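For reference, the attribute boils down to something like this
(illustrative; the actual macro and guards in pugixml.hpp may differ):

    // Illustrative deprecation macro.
    #if defined(__GNUC__)
    #  define DEPRECATED __attribute__((deprecated))
    #elif defined(_MSC_VER)
    #  define DEPRECATED __declspec(deprecated)
    #else
    #  define DEPRECATED
    #endif

    // Calling a function declared like this produces a compiler warning,
    // which is why the tests exercising deprecated functions live in a
    // separate translation unit with deprecation disabled.
    DEPRECATED void legacy_function();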
|
|
Rework NuGet package building
|
|
Unify build paths in all MSBuild VS projects and extract common build
logic into functions.
Note that both VS2010 and VS2013 projects now have more predictable output
paths and a fixed output file name (pugixml).
|
|
Also improve linkage description
|
|
We build the NuGet package manually now, so we don't need CoApp.
|
|
|
|
We'd like to build pugixml with both static & dynamic CRT and put it
all in one NuGet package.
CoApp sort of allows us to do this via dynamic/static pivots, but it
does not let us customize the names of the pivots and additionally has
some bugs with the project setup. Their project modifications are also
much more complicated - really, at this point we should do this
ourselves.
Create a simple native NuGet package with a Linkage setting that picks the
right library, and package all libraries appropriately. Note that we use a
unified path syntax to make it easy to pick the right .lib file for a given
toolset/platform/configuration/linkage combination.
|
|
The macro only works correctly when the input argument is an array with a
statically known size - pointers, or arrays that have decayed to pointers,
silently produce wrong results.
While this is unlikely to surface issues that aren't caught in
tests/code review, use _countof for MSVC to prevent such code from
compiling.
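A sketch of the difference (macro names are illustrative):

    #include <cstdlib> // _countof lives in the CRT headers on MSVC

    // Classic element-count macro: silently wrong if the argument is a pointer.
    #define COUNTOF_SIZEOF(array) (sizeof(array) / sizeof((array)[0]))

    #ifdef _MSC_VER
    // _countof refuses to compile for pointers, catching the mistake early.
    #  define COUNTOF(array) _countof(array)
    #else
    #  define COUNTOF(array) COUNTOF_SIZEOF(array)
    #endif

    int main()
    {
        int values[16];
        int* pointer = values;
        (void)values;
        (void)pointer;

        static_assert(COUNTOF(values) == 16, "size known statically");
        // COUNTOF(pointer) would compile (and be wrong) with the sizeof
        // version, but fails to compile with _countof on MSVC.
    }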
|
|
Add VS2017 to AppVeyor test run
|
|
This requires moving the list of VS versions out of
autotest-appveyor.ps1 and into appveyor.yml.
|
|
Correctly check for error codes and don't run the .bat file since it
doesn't work anyway (the variables it sets aren't accessible in PowerShell,
and the path to the script doesn't seem to be the same in VS2017).
|
|
VS2017 project + NuGet support
|
|
Add a memory allocation failure test for concat with a very large list and
make sure we have every single axis covered with and without a predicate,
with and without a previous step.
|
|
Apparently only narrow character streams had out of memory coverage -
fix that and also split this into a separate test.
|
|
Cover both char and wchar_t stream loading in a single run instead of
using pugi::char_t.
|
|
Cover more failure cases and simplify the streambuf implementation a
bit.
|
|
Rename partition to partition3 to resolve conflicts with std::partition.
|
|
Add more memory allocation failure tests.
|
|
Adjust the buffer size to be right on the edge of the overflow, and make
sure we actually output &quot; instead of a raw ".
|
|
This test triggers flush() condition for each optimized write() method.
|
|
Instead of branching code at each invocation site, use variadic macros to
create a wrapping macro that uses snprintf for a buffer of statically known
size.
Variadic macros are supported by all C++11 compilers, as is snprintf; on
MSVC 2005+ we don't necessarily have snprintf, but we can use _snprintf_s
with _TRUNCATE to get the same behavior. In all other cases we fall back to
sprintf, which (theoretically) can lead to a stack buffer overflow.
In practice all snprintf calls in pugixml use buffers that should be large
enough to never overflow, but snprintf is safe even if this is not the
case.
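A sketch of this wrapping macro (name and exact guards are illustrative,
not pugixml's):

    #include <stdio.h>

    #if defined(_MSC_VER) && _MSC_VER >= 1400 && _MSC_VER < 1900
    // MSVC 2005-2013: no snprintf, but _snprintf_s with _TRUNCATE matches it.
    #  define FORMAT_BUFFER(buf, ...) _snprintf_s(buf, sizeof(buf), _TRUNCATE, __VA_ARGS__)
    #elif __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)
    // C++11 compilers (and MSVC 2015+) provide snprintf.
    #  define FORMAT_BUFFER(buf, ...) snprintf(buf, sizeof(buf), __VA_ARGS__)
    #else
    // Last resort: plain sprintf, relying on the buffer being large enough.
    #  define FORMAT_BUFFER(buf, ...) sprintf(buf, __VA_ARGS__)
    #endif

    int main()
    {
        char buffer[32];
        FORMAT_BUFFER(buffer, "value=%d", 42); // buffer size comes from sizeof
        puts(buffer);
    }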
|
|
We use references to arrays elsewhere in the codebase and there's just
one caller for this function so it's easier to fix the size.
This will simplify snprintf refactoring.
|
|
Use snprintf instead of sprintf
|
|
Improve code coverage
|
|
codecov.io does not seem to support lcov regex customization; additionally,
we can't just replace unreachable with LCOV_LINE_EXCL in the gcov file - so
we have to patch the ##### indicator (which means the line hasn't been hit)
with 1.
See also https://github.com/codecov/support/issues/144
|
|
Now we can exclude these from code coverage since it's logically
impossible to hit them in tests.
|