Rake to build a C application

I'm attempting to migrate a C application I have been working on to use Rake instead of GNU Make. The file tree is something like:
project
├── LICENSE.md
├── Makefile
├── Rakefile
├── README.md
└── src
    ├── debug.h
    ├── main.c
    ├── queue.c
    ├── queue.h
    └── ui
        ├── ui.c
        └── ui.h
I want to build each file in a separate build directory and generate the dependencies of each .c file with either gcc or clang in a deps directory.
I cannot seem to find any examples of how to write a Rakefile to compile a C project. Does anyone have a link or some advice to help me get started?
EDIT: I have a temporary Rakefile that accomplishes some of what I eventually want it to do. I would also like to detect whether clang is installed and use it, falling back to gcc.
Here is my current Rakefile https://github.com/Rostepher/spoticli/blob/master/Rakefile
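For reference, here is a rough sketch of the shape I'm aiming for: detect the compiler, then map each source file into separate build/ and deps/ trees. The `which` helper is my own (not part of Rake), and the flags are only illustrative:

```ruby
require 'rake' # ships with Ruby; provides FileList, pathmap and the task DSL

# Prefer clang when it is on the PATH, otherwise fall back to gcc.
def which(cmd)
  ENV['PATH'].split(File::PATH_SEPARATOR)
             .map  { |dir| File.join(dir, cmd) }
             .find { |path| File.executable?(path) && !File.directory?(path) }
end

CC = which('clang') ? 'clang' : 'gcc'

SRC = Rake::FileList['src/**/*.c']
OBJ = SRC.pathmap('%{^src,build}X.o')    # src/ui/ui.c -> build/ui/ui.o

SRC.zip(OBJ).each do |src, obj|
  dep = src.pathmap('%{^src,deps}X.d')   # src/ui/ui.c -> deps/ui/ui.d
  file obj => src do
    mkdir_p File.dirname(obj)
    mkdir_p File.dirname(dep)
    # -MMD -MF writes the dependency file as a side effect of compiling
    sh "#{CC} -MMD -MF #{dep} -c #{src} -o #{obj}"
  end
end

task default: OBJ
```

The generated .d files are in Make syntax, so a later pass would still need to parse them before Rake could use them as prerequisites.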

Use Ceedling!
https://github.com/ThrowTheSwitch/Ceedling
It's an automated unit test framework for C, but it does a great job of simply building as well. You should be able to get what you're looking for by editing a YAML config file, and you'll get Rake tasks for building out of the box. Of course, you can create your own custom Rake tasks as well.
As a bonus you can create mocks with CMock and automated unit tests with Unity.
It's super easy to install from a gem:
> gem install ceedling

If you're trying to generate a native extension so you can compile a Ruby interface to C code, that's a different issue, but I think what you're looking for is how to use mkmf from Rake. While you could bare-knuckle a Rakefile into acting like make for a C project, it would be a lot of extra effort that tools like mkmf have already done for you. Take a look at:
http://ruby-doc.org/stdlib-2.0.0/libdoc/mkmf/rdoc/MakeMakefile.html
Basically, you just pass the arguments needed to mkmf and it will generate a makefile for you and run it. The documentation says it's for building extensions, but if you don't write any extension code that's irrelevant, and it will just compile your code like a plain makefile would.
As a side note: If you're looking for a more dynamic make tool or a tool to dynamically generate makefiles CMake would be a better option than putting together a hackish rake build.

Related

Python package equivalent in C

I'm kind of a beginner in C, and I would like to know if there's a way of making a package like in Python, for example:
.
├── main.py
└── pkg
    ├── file1.py
    ├── file2.py
    └── __init__.py
How would that look in C?
I'd imagine something like:
.
├── main.c
└── pkg
    ├── a.c
    ├── a.h
    ├── b.c
    └── b.h
Is this the way? If so, how would that work, and how would I use the stuff inside it?
There is no exact equivalent of this in C; the language does not care about packages.
When you want to distribute a "package", you can build it as a library, delivering the header files and precompiled libraries (static or dynamic, per OS and architecture, sometimes per compiler).
If you want to organize a big project into packages, just go ahead with your layout; your tools won't really care. You'd include headers of such "packages" by relative path from where you use them. Compilation depends on your toolchain, but generally you have one big project with all the sources.

How to install third party libraries in a directory and include them as standard C headers

I'm used to something like npm i in Node.js, or pip install in Python, to install/add 3rd-party libraries, but C has nothing like this.
Say I want to use this orca library and use its headers as I'd use any standard C header, like stdlib (I want to be able to include them with <> instead of ""). I don't understand its build instructions.
How would I get autocompletion in Vim (I use clang and coc-clangd for C autocompletion) from these headers?
How would the compiler know where these headers are located? At first I thought of adding them to the system include directory on Linux, but that didn't seem right.
If I wanted to make a C library for others to use, how would I set everything up for them to use it?
If I wanted to use my own headers as standard C ones within my project, how would I do so? Say I've got the following dir structure:
.
├── CMakeLists.txt
└── src
    ├── include
    │   ├── header1.h
    │   └── header2.h
    ├── implement1.c
    ├── main.c
    └── implement2.c
And in main.c I want to include header1.h like so:
#include <header1.h>

int main() {
    header1Somefunction();
    return 0;
}
I use CMake for building and linking C executables.
Some time ago I had a similar problem creating a C addon for Node.js with the napi.h header file (which lives in node_modules/node-addon-api): I couldn't include it as a standard header, so I had to create that addon without autocompletion, reading examples and docs. Instead of napi.h I ended up using node/node_api.h, and I wasn't even able to include it as just node_api.h, only as node/node_api.h.

How to compile TypeScript/Less files into JavaScript/CSS by using webpack and Babel?

Currently, I am developing a package that I am going to publish on npm. I wrote it in TypeScript but faced some problems with package bundling.
I put my code (TypeScript/Less files) in src; the structure is shown below:
src
├── components
│   ├── table.less
│   └── table.tsx
├── index.tsx
└── utils
    └── mock.tsx
Since I want to publish it on npm, I need it compiled to JavaScript/CSS files (in a lib folder) so that other developers can import it directly (without extra compiling in their projects). The structure should be like this:
lib
├── components
│   ├── table.css
│   └── table.js
├── index.js
└── utils
    └── mock.js
But I faced some problems:
If I use the tsc command, tsx files are compiled to js files correctly, but less files are ignored;
If I use webpack commands rather than tsc, the result is bundled into one file and loses its original structure, which will confuse package users;
I think I need to make it work by:
Compiling all files from src to lib one by one (keeping the same folder structure);
tsx files to js files;
less files to css files;
adding declaration files such as index.js.d.ts and index.css.d.ts;
rewriting some imports, such as import styles from './index.less' to import styles from './index.css', or injecting the stylesheets into the js files directly (I am not sure about this step);
Bundling one js file with everything in it (with webpack), as well as a minimized version;
The package contains JSX grammar since I used React in it.
As far as I know, I need to use Babel to compile the TS/JS code, and webpack to compile Less and other assets, but I am confused about how to make them work together.
So, any suggestions on how to combine these tools to solve my problem? I have looked through a lot of tutorials, but most of them cover React/Less/TypeScript projects (not package development) or TypeScript packages (without Less/CSS).
Thanks a lot.
This question has been around for a long time, I hope my answer can still help people who click in to find the answer:
As you said, .tsx files can be compiled by tsc into .js files, but .scss or .less files cannot, so you need to use node-sass or less to process them.
Based on your directory structure above, you can write the following commands under scripts in package.json:
if you use less:
"scripts": {
  "build:css": "lessc src/components/table.less lib/components/table.css"
}
or node-sass:
"scripts": {
  "build:css": "node-sass -r src -o lib"
}
Yes, node-sass scans the source folder and automatically compiles the .scss files in it into .css files in the corresponding output directory. less may require you to spend more time finding an equivalent approach.
But just converting to CSS files isn't enough, because the style files imported in the js files still have .less or .scss suffixes; we need to replace that part in each .js file. You might think of reading each .js file with node and doing a regex replacement. That idea is generally fine, but a global regex replacement can cause subtle problems (although it's almost impossible), so I used an AST to do the replacement.
Yes, I wrote an easy-to-use command-line tool for this; you just need to:
npm i tsccss -D
and add npm script:
"scripts": {
  "build:css": "node-sass -r src -o lib",
  "compile": "tsc -p tsconfig.json && tsccss -o lib",
  "build": "npm run build:css && npm run compile"
}
Yes, as you can see, tsccss -o lib is completely enough.
Hope you enjoy it, here is github/repo: tsccss

How do I make vendoring work with Google App Engine?

I am trying to introduce Go vendoring (storing dependencies in a folder called vendor) to an existing App Engine project. I have stored all dependencies in the vendor folder (using Godep as a helper) and it looks right, but when I run the application locally I get the following error:
go-app-builder: Failed parsing input: package "golang.org/x/net/context" is imported from multiple locations: "/Users/erik/go/src/github.com/xyz/abc/vendor/golang.org/x/net/context" and "/Users/erik/go/src/golang.org/x/net/context"
I believe the two locations should resolve to the same location, as Go applications should look in the vendor folder first. Is there a way to make Appengine understand that both dependencies are the same?
Your project directory (where app.yaml is) is probably in the GOPATH/src.
It shouldn't be.
The go-app-builder will take everything in the app.yaml folder (and below) and additionally merge your GOPATH into it, meaning now you have it twice.
The solution is to move app.yaml out of the GOPATH/src folder.
Additionally, you'll find that goapp test works differently from goapp serve and goapp deploy when it comes to resolving dependencies.
So this is the setup I have been using (I haven't used Go on App Engine in a while), and it's the only one I've found that works properly for all the goapp commands and for govendor (not sure about godep):
/GOPATH
├── /appengine
│   ├── app.yaml
│   └── aeloader.go
└── /src
    └── /MYPROJECT
        ├── main.go
        ├── /handler
        │   └── handler.go
        └── /vendor
details:
file: GOPATH/appengine/aeloader.go (NOTE the init function is necessary, probably a bug though)
package mypackage

import (
    _ "MYPROJECT"
)

func init() {
}
Now run goapp serve and goapp deploy from ../GOPATH/appengine/, and goapp test ./... from ../GOPATH/src/MYPROJECT.
P.S. I find the global GOPATH thing silly and simply set my GOPATH to current project folder (in the example above /GOPATH) and check the whole thing into version control.
I use a Makefile to move the vendor directory to a temporary GOPATH:
TMPGOPATH := $(shell mktemp -d)

deploy:
	mv vendor $(TMPGOPATH)/src
	GOPATH=$(TMPGOPATH) gcloud app deploy
	mv $(TMPGOPATH)/src vendor
I store this Makefile at the root of my service near the vendor directory and simply use make deploy to deploy manually or from the CI.
It works with Glide, Godeps or any tool that respects the Go vendor spec.
Please note, that you really need to move the vendor directory out of the build directory, otherwise the GoAppEngine compiler will try to build the vendor dependencies, potentially causing compile errors.
I just ran into this issue myself, actually. The problem occurs when you're using the App Engine tools to build any package that imports something using vendoring, but the package you're trying to run doesn't have that import within its own vendor directory.
So, for example, say I'm trying to run package foo, which imports package bar, and both of them use the github.com/gorilla/mux library. If the bar repository has a vendor/ directory that contains gorilla/mux, but the foo package doesn't have gorilla/mux in its vendor/ directory, this error will occur.
The reason this happens is that the bar package will prioritize its own vendored copy over the one in the GOPATH, which is what foo will be using, causing a difference in the actual location of the imported paths.
The solution I found is to make sure that the foo directory is in the GOPATH and has the vendor directory properly installed. It's important to note that the vendor/ convention only works from within the GOPATH.
I managed to resolve this error by using govendor instead of Godeps. The root cause appears to have been that vendored references with their own vendored references were not resolved correctly by Godeps.
The answer provided by Su-Au Hwang is also correct - you do have to keep app.yaml separate from your source.
Also got the same problem.
In the docs Google suggests the following:
For best results, we recommend the following:
Create a separate directory in your app's directory for each service.
Each service's directory should contain the service's app.yaml file and one or more .go files.
Do not include any subdirectories in a service's directory.
Your GOPATH should specify a directory that is outside your app's directory and contain all the dependencies that your app imports.
But this messes up my project structure, which looks like this:
GOPATH/
└── src
    └── github.com
        └── username
            └── myproject
                ├── app.yaml
                ├── cmd
                │   └── myproject
                │       └── main.go
                ├── handlers
                │   └── api.go
                ├── mw
                │   ├── auth.go
                │   └── logger.go
                └── vendor
Where the myproject directory is a git project and the vendor folder contains all dependencies.
Running gcloud app deploy from the myproject directory, where the app.yaml file lives, doesn't work: first, the main.go file is not in the same directory, and second (from the same doc):
you must be careful not to place your source code at or below your app's directory where the app.yaml file is located
What I ended up doing is building my own custom runtime instead, which turned out to be a very clean solution.
Simply generate the Dockerfile with the following command:
gcloud beta app gen-config --custom
Modify it, then specify runtime: custom in your app.yaml and deploy normally.
The trick here is of course that you're in control what gets copied where.
Here is my Dockerfile:
# Dockerfile extending the generic Go image with application files for a
# single application.
FROM gcr.io/google-appengine/golang
ENV GOPATH /go
# The files which are copied are specified in the .dockerignore file
COPY . /go/src/github.com/username/myproject/
WORKDIR /go/src/github.com/username/myproject/
RUN go build -o dist/bin/myproject ./cmd/myproject
# All configuration parameters are passed through environment variables and specified in app.yaml
CMD ["/go/src/github.com/username/myproject/dist/bin/myproject"]
Don't forget that App Engine expects your application listening on port 8080. Check out Building Custom Runtimes doc for more details.

Is there a better way to structure a C project with cmake?

I'm starting a new C project using CMake, so I created a directory structure very similar to the ones I use in Python (my "main" language). Although it compiles correctly, I'm not certain I'm doing it the right way. This is the current structure:
.
├── CMakeLists.txt
├── dist
│   └── # project will be built here, 'cmake ..'
├── extras
│   ├── CMakeLists.txt
│   ├── extra1
│   │   ├── CMakeLists.txt
│   │   ├── extra1.h
│   │   └── extra1.c
│   └── extra2
│       ├── CMakeLists.txt
│       ├── extra2.h
│       └── extra2.c
├── src
│   ├── CMakeLists.txt
│   ├── main.c
│   ├── module1.h
│   ├── module1.c
│   ├── module2.h
│   └── module2.c
└── test
    ├── CMakeLists.txt
    ├── test_module1.c
    └── test_module2.c
Since the files are distributed across multiple directories, I had to find a way to locate the libraries present in extras and the ones I need to test in src. So, these are my CMakeLists files:
./CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(MyProject)
add_definitions(-Wall -std=c99)
# I don't know really why I need this
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_BINARY_DIR}/dist)
add_subdirectory(src)
add_subdirectory(test)
add_subdirectory(extras)
enable_testing()
add_test(NAME DoTestModule1 COMMAND TestModule1)
add_test(NAME DoTestModule2 COMMAND TestModule2)
./src/CMakeLists.txt
macro(make_library target source)
  add_library(${target} ${source})
  target_include_directories(${target} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
endmacro(make_library)
make_library(Module1.o module1.c)
make_library(Module2.o module2.c)
./test/CMakeLists.txt
macro(make_test target source library)
  add_executable(${target} ${source})
  target_link_libraries(${target} Libtap.o ${library})
  target_include_directories(${target} PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
endmacro(make_test)
make_test(TestModule1 test_module1.c Module1.o)
make_test(TestModule2 test_module2.c Module2.o)
./extras/CMakeLists.txt
# Hopefully you'll never need to change this file
foreach(subdir ${SUBDIRS})
  add_subdirectory(${subdir})
endforeach()
(Finally) ./extras/libtap/CMakeLists.txt
add_library(Libtap.o tap.c)
target_include_directories(Libtap.o PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
So, now the question: the reason I'm worried is that this setup creates a 'public' library for every file I'm using, including extra libs (which are not meant to be distributed). If I have 10 libraries in src, 4 dependencies in extras (including libtap, which I use for testing) and at least as many test files, I'll end up with 24 compiled artifacts.
Is there any better way to expose libraries to linking?
I'm not compiling "main" yet, what would be the right configuration for that?
is add_definitions the right way to add flags to the compiler?
How can I make this structure more DRY?
Is there any better way to expose libraries to linking?
No, this seems fine.
You might however want to reconsider the granularity at which you create static libraries. For example, if all applications except the tests will only ever use Module1 and Module2 in combination, you might want to merge them into a single library target. Sure, the tests will link against parts of the component that they do not use, but that is a small price to pay for the decrease in build complexity.
I'm not compiling "main" yet, what would be the right configuration for that?
There's nothing wrong with adding it to the src/CMakeLists.txt as well:
add_executable(my_main main.c)
target_link_libraries(my_main Module1.o Module2.o)
is add_definitions the right way to add flags to the compiler?
It can be used for that purpose, but might not be ideal.
Newer CMake scripts should prefer the target_compile_options command for this purpose. The only disadvantage here is that if you want to reuse the same compile options for all targets in your projects, you also have to do the same target_compile_options call for each of those. See below for tips on how to resolve that.
How can I make this structure more DRY?
First of all, unlike most program code, redundancy is often not that big an issue in build system code. The notable thing to look out for here is stuff that gets in the way of maintainability. Getting back to the common compiler options from before: Should you ever want to change those flags in the future, it is likely that you want to change them for every target. Here it makes sense to centralize the knowledge about the options: Either introduce a function at the top-level that sets the option for a given target, or store the options to a global variable.
In either case you will have to write one line per target to get the option, but it will not generate any maintenance overhead after that. As an added bonus, should you actually need to change the option for only one target in the future, you still have the flexibility to do so.
Still, take care not to overengineer things. A build system should first and foremost get things done.
If the easiest way to set it up means you copy/paste a lot, go for it! If during maintenance later it turns out that you have some real unnecessary redundancies, you can always refactor.
The sooner you accept the fact that your CMake scripts will never be as pretty as your program code, the better ;)
One small nitpick at the end: Avoid giving your target names extensions. That is, instead of
add_library(Libtap.o tap.c)
consider
add_library(Libtap tap.c)
CMake will automatically append the correct file ending depending on the target platform anyway.