I read the documentation and can only find TDengine's retention policy. Is there a way to delete a range of data?
Currently, TDengine 2.x does not support deleting a specified range of data. The only way to remove data is the "keep" option in the config, which eliminates outdated data once it has been stored for longer than keep.
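For example, in 2.x the retention is set per database in days, so a database keeping at most 30 days of data would be created like this (the database name is a placeholder):

    create database db keep 30;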
DELETE will be supported in the next release; you can clone TDengine, check out the develop branch, and build TDengine yourself.
The grammar looks like this:
delete from stb where ts > timestamp and tag = tagvalue
delete from tb
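As a rough illustration, once a build with DELETE support is installed, such a statement could be issued from Python with the taospy connector (the host, credentials, and table/tag names are placeholders):

    import taos  # TDengine Python connector (taospy)

    conn = taos.connect(host="localhost", user="root",
                        password="taosdata", database="db")
    cursor = conn.cursor()
    # Remove all rows with this tag value newer than the given timestamp.
    cursor.execute("delete from stb "
                   "where ts > '2022-01-01 00:00:00' and tag = 'tagvalue'")
    conn.close()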
In the DataStream API we have withInactivityInterval, which sets how long the sink can be inactive before the current file is closed.
But there is no such option in the Table API, so if the stream feeding the table pauses for a few seconds, the Table API closes the file and starts a new one after the pause. Is there any way to avoid this?
And how can we set the file suffix in the Table API?
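For reference, the DataStream-side configuration being referred to looks roughly like this in PyFlink (module paths and builder names follow recent PyFlink releases and may differ in yours; the path, sizes, and suffix are placeholders):

    from pyflink.common.serialization import Encoder
    from pyflink.datastream.connectors.file_system import (
        FileSink, OutputFileConfig, RollingPolicy)

    sink = (FileSink
            .for_row_format("/tmp/output", Encoder.simple_string_encoder())
            .with_rolling_policy(RollingPolicy.default_rolling_policy(
                part_size=1024 * 1024 * 1024,      # roll at ~1 GiB
                rollover_interval=15 * 60 * 1000,  # or after 15 minutes (ms)
                inactivity_interval=60 * 1000))    # close after 60 s of inactivity
            .with_output_file_config(              # the suffix is set here
                OutputFileConfig.builder().with_part_suffix(".ext").build())
            .build())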
I have submitted a PR; if nothing goes wrong, it should be supported in version 1.15:
https://github.com/apache/flink/pull/18359
I confirmed that the current implementation of the Table API really does not support setting inactivityInterval.
I created a JIRA issue and will follow up there; thanks for your feedback:
https://issues.apache.org/jira/browse/FLINK-25484
I just updated a single document in MongoDB, my update went wrong, and I lost the previous data. Is there any way to get the data as it was before the update?
If you wrote the document you are trying to restore recently, and you are using a replica set, you should be able to recover the previous version of the document from the oplog. Start here.
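As a sketch of where to look, you can inspect the oplog from the mongo shell or a driver; here is a minimal pymongo example (the replica-set URI and the database/collection names are placeholders):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    oplog = client.local["oplog.rs"]  # the oplog is a capped collection in the 'local' db

    # Walk recent update operations on the collection, newest first.
    # 'o2' identifies the matched document, 'o' describes the change applied.
    for entry in (oplog.find({"ns": "mydb.mycoll", "op": "u"})
                       .sort("$natural", -1).limit(10)):
        print(entry["ts"], entry.get("o2"), entry.get("o"))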
Alternatively, Atlas provides a point-in-time restore feature.
I am designing a SOLR schema for my project, and will create the fields using the Schema-API.
I will likely need to add new fields to the schema in the future.
With SQL databases, I usually store a schema version number in a well-known table. Then when my app starts up, it checks to make sure the database schema is current. If not, I execute all of the needed updates (which are numbered) to bring it up to date.
How can I achieve this with SOLR using the schema-api? Specifically, how/where could I store and retrieve a version number?
My current workaround/solution is to store the version in the name of a SOLR field, for example by creating a field called "schema_version_2". When my app starts up, I retrieve the list of fields using the schema-api and iterate over them, looking for a field called "schema_version_XX".
Then I can determine whether I need to apply any upgrades to the SOLR schema for my app. If necessary, my app upgrades to the latest schema version (typically adding/modifying fields). At the end, I increment the version, for example by deleting the "schema_version_2" field and creating a new field called "schema_version_3".
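For illustration, the check described above can be done against the Schema API roughly like this (Python with requests; the core URL, field names, and version numbers are placeholders):

    import requests

    SOLR = "http://localhost:8983/solr/mycore"

    # List all fields and look for the schema_version_XX marker field.
    fields = requests.get(f"{SOLR}/schema/fields").json()["fields"]
    versions = [int(f["name"].rsplit("_", 1)[1])
                for f in fields if f["name"].startswith("schema_version_")]
    current = max(versions) if versions else 0

    if current < 3:
        # ... apply the numbered upgrades here (add/modify fields) ...
        # Then bump the marker: drop the old field and create the new one.
        commands = {"add-field": {"name": "schema_version_3", "type": "string"}}
        if versions:
            commands["delete-field"] = {"name": f"schema_version_{current}"}
        requests.post(f"{SOLR}/schema", json=commands)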
I would still like to know what pattern and solution developers with more SOLR experience use to solve this problem.
I remove documents in CouchDB by setting the _deleted attribute to true (PUT method). The latest revision of the document is deleted, but the previous revisions are still available.
And when I pull documents of a specific type from the database, this document still shows up.
How should I delete a document so that it is no longer available?
I use synchronization between CouchDB on the server and PouchDB instances in mobile applications (Ionic).
You need to compact your database. Compaction is the process of removing unused and old data from database or view index files, not unlike VACUUM in an RDBMS. It can be triggered by calling the _compact endpoint of a database, e.g. curl -X POST http://192.168.99.100:5984/koi/_compact -H 'Content-Type: application/json'. After that, attempts to access previous revisions of a deleted document should return a 404 error with the reason "missing".
Note that the document itself is not going to disappear completely; something called a "tombstone" will be left behind. The reason is that CouchDB needs to track deleted documents during replication to prevent accidental document recovery.
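The same flow from Python, as a minimal sketch (the database URL, document id, and revision are placeholders):

    import requests

    DB = "http://192.168.99.100:5984/koi"

    # Trigger compaction; CouchDB accepts the task and compacts in the background.
    r = requests.post(f"{DB}/_compact",
                      headers={"Content-Type": "application/json"})
    print(r.status_code, r.json())  # 202 {'ok': True}

    # Once compaction has run, old revisions of a deleted document are gone.
    old = requests.get(f"{DB}/some_doc", params={"rev": "1-abc123"})
    print(old.status_code, old.json())  # 404 {'error': 'not_found', 'reason': 'missing'}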
I'm using the Solr search engine and I'm new to it. I want the data to be updated automatically every time my database is updated or new data is created in the tables. I tried delta import and full import, but with these methods I have to run the import manually whenever I need an update.
Which way is best for updating Solr documents?
How can I make it automatic?
Thanks for your help.
There isn't a built-in way to do this with Solr. I wouldn't recommend running a full or delta import when just updating one row in a table. What most Solr deployments do with a database is update the corresponding Solr document when updating a row. This is application specific, but it is the most efficient and standard way of dealing with the issue.
Full or delta imports are typically something you would run nightly or every few hours.
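As a sketch of that pattern, the application pushes the changed row to Solr right after the database write commits (Python with requests; the core URL and the column-to-field mapping are placeholders):

    import requests

    SOLR_UPDATE = "http://localhost:8983/solr/mycore/update"

    def on_row_updated(row):
        # Map the table columns to Solr fields; 'id' must be the unique key.
        doc = {"id": str(row["id"]), "title": row["title"]}
        # commitWithin lets Solr batch commits instead of committing per document.
        requests.post(SOLR_UPDATE, json=[doc], params={"commitWithin": "5000"})

    on_row_updated({"id": 42, "title": "updated title"})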
So basically you want to process a document before adding it to Solr.
This can be achieved by adding a new update processor to the update processor chain; you can go through: Solr split joined dates to multivalue field.
There they split the data in a field and saved it as a multi-valued field.
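The general shape of such a chain in solrconfig.xml looks roughly like this (the chain name and script file are placeholders; a stateless script processor is just one way to implement the custom step):

    <updateRequestProcessorChain name="preprocess">
      <!-- custom step, e.g. a script that splits a joined field into values -->
      <processor class="solr.StatelessScriptUpdateProcessorFactory">
        <str name="script">split-field.js</str>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

Update requests then select the chain with the update.chain request parameter.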