Which library does the TDengine database use for serialization and deserialization?

Which library does the TDengine database use for serialization and deserialization?
Reading through some of the code, I found that it uses htonl/htons and ntohl/ntohs.
What does it use for everything else?

Related

What data sources does Snowflake support?

As mentioned in the title, I'd like to know what data sources Snowflake supports. I'm not completely sure how to even approach this question. I know you can create an external stage in the cloud storage of supported cloud providers, but what if I want to load data from an Oracle database, for example? What's the best solution in that case: using the ODBC driver, or something else?
Please feel free to give me any suggestions or advice on where to continue my research. Also, let me know if any part of my question is unclear so that I can rephrase it :)
Snowflake natively supports Avro, Parquet, CSV, JSON, and ORC. These are landed in a stage for ingestion: your ELT/ETL tool of choice, or even a home-built application, must land the data in a stage, either internal or external.
That file is then ingested into Snowflake using a COPY command, either automated by said tool or by something like Snowpipe.
We have documentation on Firehose/Kafka pipelines landing data for Snowpipe to ingest, either through AUTO_INGEST notifications (limited to external stages) or by calling our REST API.
All of this is covered in our documentation; searching for the terms mentioned above will turn up plenty of detail:
Multiple existing ETL tools allow you to define Snowflake as a destination, supporting a wide variety of sources.
Native Programmatic Interfaces
Snowflake Ecosystem - Data Integration

Advantages of netezza client tools over aginity

My team recently started working with Netezza. I'm responsible for loading data into the database in the most efficient manner. They want me to look into things such as automating the loading of data and more.
Right now I'm using Aginity as an interface to load data but I'm wondering if there are any advantages of using Netezza Client tools (with nzload and more) instead of Aginity whether it's for loading data or anything else. When should I use one over the other?
Aginity is nice for exploration and code development.
IMHO you’ll need a proper (but lightweight) scripting language to do any kind of automated loading/extraction/manipulation of data.
Python, bash, powershell - doesn’t really matter.
Automation requires error handling and simple decision making combined with the ability to manipulate sql statements dynamically, and all scripting languages can do that.
Whether you call nzsql as a command-line utility from that tool or use an ODBC or JDBC capability in said scripting language is not of any consequence either.

Database access in C

I am developing an application completely written in C. I have to save data permanently somewhere. I tried file storage, but it feels like a really primitive way to do the job, and I don't want to save my sensitive data in a plain text file. How can I save my data and access it back in an easy manner? I come from a JavaScript background and would prefer something like JSON. I would also be happy with something like PostgreSQL. Give me some suggestions. I am using gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3.
SQLite seems to meet your requirements.
SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained in a single disk file. The database file format is cross-platform: you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an Application File Format. Think of SQLite not as a replacement for Oracle but as a replacement for fopen().
Check out the quickstart
http://www.postgresql.org/docs/8.1/static/libpq.html
libpq is the C application programmer's interface to PostgreSQL. libpq is a set of library functions that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries.
I would recommend SQLite. I think it is a great way of storing local data.
There are C library bindings, and its API is quite simple.
Its main advantage is that all you need is the library. You don't need a complex database server setup (as you would with PostgreSQL). Also, its footprint is quite small; it's also used a lot in the mobile development world (iOS, Android, and others).
Its drawback is that it doesn't handle concurrency that well. But if it is a local, simple, single-threaded application, then I guess it won't be a problem.
MySQL embedded or BerkeleyDB are other options you might want to take a look at.
SQLite is a lightweight database. This page describes the C language interface:
http://www.sqlite.org/capi3ref.html
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code for SQLite is in the public domain.
SQLite is a popular choice because it's lightweight and speedy. It also offers a C/C++ interface (along with bindings for a bunch of other languages).
Everyone else has already mentioned SQLite, so I'll counter with dbm:
http://linux.die.net/man/3/dbm_open
It's not quite as fancy as SQLite (e.g., it's not a full SQL database), but it's often easier to work with from C, as it requires less setup.

In-memory database using Data Structures and Algorithms specifically in C Programming Language

How do you go about creating an in-memory database using data structures, specifically in the C programming language?
You might try looking over the code for sqlite:
http://www.sqlite.org/
It is implemented in C and provides a single-process SQL database backend. It can support an "in-memory" mode out of the box:
http://sqlite.org/c3ref/open.html
The sqlite code is quite compact and there is a fair amount of API documentation on the website. It might give you a useful case study for your own work.

Using Haskell with a database backend for "business applications"

I would like to know whether I can use Haskell with a small database like SQL Server Compact, so that the client won't have to install any server on their desktop.
Is there an API for issuing SQL statements and so on?
What is the best way to build a small database application using Haskell?
Thanks for the help.
SQLite is a great option for a small, lightweight database you can embed in your application. See HackageDB for a Haskell binding.
There are 57 database libraries and tools for Haskell on Hackage. The most popular is HDBC, an order of magnitude more popular than anything else, and has the HDBC-sqlite backend.
I would definitely recommend SQLite. If you are looking for a library to help keep the type safety of Haskell with a concise syntax, I would recommend checking out Persistent, which has a SQLite backend.
