Binary Persistence of Geom_BoundedCurve

Hello,

I am using OCAF and creating documents with attributes of our own types, for which I am using our own persistence drivers. Some of my attributes contain instances of Geom_BoundedCurve and I would like to add them to the persistence mechanism. How can I store Geom_BoundedCurves in binary OCAF documents?

There are two possibilities that I am considering:

1. Replace all my function arguments of type "Geom_BoundedCurve" with "Adaptor3d_Curve" and use GeomAdaptor_Curve instead of Geom_BoundedCurve. Then, while writing the OCAF document to disk, create a TopoDS_Edge out of every Geom_BoundedCurve and store it as a TNaming_NamedShape attribute on the same label as its parent attribute. When retrieving it, use a BRepAdaptor_Curve to be able to call the necessary functions again after retrieval (a rough sketch follows below this list).
2. Use BinTools_CurveSet to "convert" each Geom_BoundedCurve to a string, store the string with BinObjMgt_Persistent::PutCString(...), and when retrieving the curve, convert the string back to a curve, again using the methods of BinTools_CurveSet.
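
To make the first possibility more concrete, here is a rough sketch (the helper names are made up and error handling is omitted):

#include <BRepAdaptor_Curve.hxx>
#include <BRepBuilderAPI_MakeEdge.hxx>
#include <Geom_BoundedCurve.hxx>
#include <TDF_Label.hxx>
#include <TNaming_Builder.hxx>
#include <TNaming_NamedShape.hxx>
#include <TopoDS.hxx>
#include <TopoDS_Edge.hxx>

// Storing: wrap the curve in an edge and attach it to the label
void storeCurveAsShape(const TDF_Label& label, const Handle(Geom_BoundedCurve)& curve)
{
	const TopoDS_Edge edge = BRepBuilderAPI_MakeEdge(curve).Edge();
	TNaming_Builder builder(label);
	builder.Generated(edge);
}

// Retrieving: get the edge back and evaluate it through BRepAdaptor_Curve
Standard_Boolean retrieveCurveAdaptor(const TDF_Label& label, BRepAdaptor_Curve& adaptor)
{
	Handle(TNaming_NamedShape) namedShape;

	if (!label.FindAttribute(TNaming_NamedShape::GetID(), namedShape))
	{
		return Standard_False;
	}

	adaptor.Initialize(TopoDS::Edge(namedShape->Get()));

	return Standard_True;
}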

Neither possibility seems particularly elegant. Is there another possibility? Is there a standard way to store Geom_Curves in OCAF documents that I am not aware of?

Thank you very much!
Benjamin

Benjamin Bihler:

I have managed to read and write curves.

Here is my code for reading curves:

Standard_Integer curveBytesSize;

// ... read in count of bytes

std::vector<Standard_Byte> curveBytes(curveBytesSize, 0);

// Read byte array from BinObjMgt_Persistent
const Standard_Boolean success =
		source.GetByteArray(curveBytes.data(), curveBytesSize);

if (!success)
{
	// Error message...
	return Standard_False;
}

std::stringstream stream;

for (const Standard_Byte& byte : curveBytes)
{
	stream.put(byte);
}

Handle(Geom_BoundedCurve) value;

BinTools_CurveSet::ReadCurve(stream, value);

And here is my code for writing:

std::stringstream stream;
BinTools_CurveSet::WriteCurve(curve, stream);

std::vector<Standard_Byte> curveBytes;

// Collect the stream contents; test the extracted character rather than
// eof(), otherwise the EOF marker is appended as a spurious trailing byte
for (int character = stream.get(); character != EOF; character = stream.get())
{
	curveBytes.push_back(static_cast<Standard_Byte>(character));
}

// Write everything to BinObjMgt_Persistent
target.PutInteger(static_cast<Standard_Integer>(curveBytes.size()));
target.PutByteArray(curveBytes.data(), curveBytes.size());

I have used this, but the resulting binary documents are extremely large, even though my curves are actually just straight lines. I wonder whether there is a way to reduce the size. Does Open CASCADE already contain algorithms for that? Is there an easy way to compress byte arrays containing curve data?

Benjamin

Kirill Gavrilov:

Benjamin, are you looking for ways to reduce the file size (as opposed to the in-memory representation of the data)? Or for better structures / approaches to store the same data?

I don't recall any existing OCAF drivers using compression, so I guess you may either apply compression to the resulting file (suboptimal, of course) or introduce such a feature in a custom OCAF driver / propose a patch to OCCT itself.

There are plenty of compression options to consider - zlib, LZMA, LZ4, quantization (lossy compression of floating-point values) - each having its pros and cons (compression ratio, compression time, decompression time, lossy/lossless, etc.).
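
Just as an illustration of the custom-driver route, a minimal sketch with plain zlib (the function names are placeholders, error handling is omitted, and the uncompressed size has to be stored alongside the compressed bytes):

#include <zlib.h>
#include <Standard_TypeDef.hxx>
#include <vector>

// Compress a raw byte buffer with zlib
std::vector<Standard_Byte> deflateBytes(const std::vector<Standard_Byte>& rawBytes)
{
	std::vector<Standard_Byte> compressed(compressBound(static_cast<uLong>(rawBytes.size())));
	uLongf compressedSize = static_cast<uLongf>(compressed.size());
	compress2(compressed.data(), &compressedSize,
			rawBytes.data(), static_cast<uLong>(rawBytes.size()), Z_BEST_COMPRESSION);
	compressed.resize(compressedSize);
	return compressed;
}

// Decompress; originalSize is the uncompressed size stored next to the data
std::vector<Standard_Byte> inflateBytes(const std::vector<Standard_Byte>& compressedBytes,
		const Standard_Integer originalSize)
{
	std::vector<Standard_Byte> restored(originalSize);
	uLongf restoredSize = static_cast<uLongf>(restored.size());
	uncompress(restored.data(), &restoredSize,
			compressedBytes.data(), static_cast<uLong>(compressedBytes.size()));
	restored.resize(restoredSize);
	return restored;
}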

Benjamin Bihler:

Thank you for your answer, Kirill. Yes, I would like to have files that are not extraordinarily large. And I would like to stay as close as possible to the Open CASCADE data types, i.e. curve evaluation results should not be different after storing and retrieving a curve.

Storing 6000 straight curves using BinTools_CurveSet as shown above, for example, takes 40 MByte of hard disk space - roughly 7 kByte per curve. I wonder whether this is excessive or to be expected.

I have managed to use Boost.Iostreams to compress the curve data like this:

Standard_Boolean PersistableAttributeDriver::readCurve(
		const BinObjMgt_Persistent& source, const std::string& readableName,
		Handle(Geom_BoundedCurve)& value) const
{
	Standard_Integer curveBytesSize;

	{
		const Standard_Boolean success = source >> curveBytesSize;

		if (!success)
		{
			TCollection_ExtendedString message = "The data size of ";
			message += UnicodeUtilities::toExtendedString(readableName);
			message += " could not be read.";
			myMessageDriver->Send(message, Message_Fail);

			return Standard_False;
		}
	}

	if (0 == curveBytesSize)
	{
		value.Nullify();

		return Standard_True;
	}

	std::vector<Standard_Byte> curveBytes(curveBytesSize, 0);

	const Standard_Boolean success =
			source.GetByteArray(curveBytes.data(), curveBytesSize);

	if (!success)
	{
		TCollection_ExtendedString message =
				UnicodeUtilities::toExtendedString(readableName);
		message += " could not be read.";
		myMessageDriver->Send(message, Message_Fail);

		return Standard_False;
	}

	std::stringstream stream;

	for (const Standard_Byte& byte : curveBytes)
	{
		stream.put(byte);
	}

	boost::iostreams::filtering_streambuf<boost::iostreams::input>
			filteringBuffer;
	filteringBuffer.push(boost::iostreams::zlib_decompressor());
	filteringBuffer.push(stream);

	std::stringstream decompressed;
	boost::iostreams::copy(filteringBuffer, decompressed);

	BinTools_CurveSet::ReadCurve(decompressed, value);

	return Standard_True;
}

void PersistableAttributeDriver::writeCurve(
		BinObjMgt_Persistent& target, const Handle(Geom_BoundedCurve)& curve) const
{
	if (curve.IsNull())
	{
		target.PutInteger(0);

		return;
	}

	std::stringstream stream;
	BinTools_CurveSet::WriteCurve(curve, stream);

	boost::iostreams::filtering_streambuf<boost::iostreams::input>
			filteringBuffer;
	filteringBuffer.push(boost::iostreams::zlib_compressor());
	filteringBuffer.push(stream);

	std::stringstream compressed;
	boost::iostreams::copy(filteringBuffer, compressed);

	std::vector<Standard_Byte> curveBytes;

	// Test the extracted character rather than eof(), otherwise the EOF
	// marker is appended as a spurious trailing byte
	for (int character = compressed.get(); character != EOF;
			character = compressed.get())
	{
		curveBytes.push_back(static_cast<Standard_Byte>(character));
	}

	target.PutInteger(static_cast<Standard_Integer>(curveBytes.size()));
	target.PutByteArray(curveBytes.data(), curveBytes.size());
}

This reduces the file size for 6000 curves from 40 MByte to 17 MByte.

I wonder whether my curve creation algorithm is to blame. If I use a B-spline interpolation of gp_Pnts (to stay general) and the points all happen to lie on a straight line, but my algorithm has chosen (much?!) more than two spline support points, then the curve data is probably larger than absolutely necessary. Is there an easy way to reduce such "unnecessary" information during B-spline interpolation?
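
One possible workaround (just a sketch, not an existing OCCT feature; the helper name is made up) would be to check whether the interpolation points are collinear and, in that case, store a plain line segment - a Geom_TrimmedCurve is also a Geom_BoundedCurve - and otherwise use an approximation such as GeomAPI_PointsToBSpline instead of an exact interpolation to limit the number of poles:

#include <GC_MakeSegment.hxx>
#include <GeomAPI_PointsToBSpline.hxx>
#include <Geom_BSplineCurve.hxx>
#include <Geom_BoundedCurve.hxx>
#include <Geom_TrimmedCurve.hxx>
#include <TColgp_Array1OfPnt.hxx>
#include <gp_Dir.hxx>
#include <gp_Lin.hxx>
#include <gp_Vec.hxx>

// Build a compact curve from points that may happen to be collinear
Handle(Geom_BoundedCurve) makeCompactCurve(const TColgp_Array1OfPnt& points,
		const Standard_Real tolerance)
{
	const gp_Pnt& first = points.First();
	const gp_Pnt& last = points.Last();

	if (first.Distance(last) > tolerance)
	{
		// Check whether all points lie on the straight line from first to last
		const gp_Lin line(first, gp_Dir(gp_Vec(first, last)));
		Standard_Boolean collinear = Standard_True;

		for (Standard_Integer i = points.Lower(); i <= points.Upper() && collinear; ++i)
		{
			collinear = line.Distance(points(i)) <= tolerance;
		}

		if (collinear)
		{
			// A trimmed line segment is stored much more compactly than a B-spline
			return GC_MakeSegment(first, last).Value();
		}
	}

	// Otherwise fit a B-spline (approximation, not exact interpolation)
	return GeomAPI_PointsToBSpline(points).Curve();
}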

Benjamin