I18n strategies for Go with App Engine - google-app-engine

Not necessarily specific to GAE I suppose, but I'm curious as to what people are using to translate or localise their web applications.
My own approach I'm afraid is hopelessly naive, really just a hand-wave at the issue by loading an entity from the datastore for each package based on a locale value recorded in the user's profile. At least this allows translations of a few strings to be provided:
package foo
...
type Messages struct {
    Locale          string
    ErrorDatastore  string
    LoginSuccessful string
    ...
}
Store with a string id corresponding to a locale, then load to Gorilla context or similar:
const Messages ContextKey = iota
...
k := datastore.NewKey(c, "Messages", "en_US", 0, nil)
m := new(Messages)
if err := datastore.Get(c, k, m); err != nil {
    ...
} else {
    context.Set(r, Messages, m)
}
Which is obviously incredibly limited, but at least makes strings available from calling code via context.Get(r, foo.Messages). Can anyone point me at more useful implementations, or suggest a better approach?
Edit (relevant but not completely useful):
gettext: a MO file parser
go-i18n
Internationalization plan for Go
Polyglot

Jonathan Chan points out Samuel Stauffer's go-gettext which seems to do the trick. Given the directories:
~appname/
|~app/
| `-app.go
|+github.com/
`-app.yaml
Start with (assumes *nix):
$ cd appname
$ git clone git://github.com/samuel/go-gettext.git github.com/samuel/go-gettext
Source preparation cannot use the _("String to be translated") short form, because the underscore (blank identifier) has special meaning in Go. You can tell xgettext to look for the CamelCase function name "GetText" using the -k flag.
Minimal working example:
package app

import (
    "fmt"
    "log"
    "net/http"

    "github.com/samuel/go-gettext"
)

func init() {
    http.HandleFunc("/", home)
}

func home(w http.ResponseWriter, r *http.Request) {
    d, err := gettext.NewDomain("appname", "locale")
    if err != nil {
        log.Fatal("Failed at NewDomain.")
    }
    cat := d.GetCatalog("fr_FR")
    if cat == gettext.NullCatalog {
        log.Fatal("Failed at GetCatalog.")
    }
    fmt.Fprint(w, cat.GetText("Yes."))
}
Create the template with:
$ xgettext -d appname -kGetText -s -o appname.pot app/app.go
Note the -k flag: without it there will be no output, as xgettext won't recognise calls to GetText. Edit the relevant strings, email addresses etc. in appname.pot. Let's assume we're localising for French:
$ mkdir -p locale/fr_FR/LC_MESSAGES
$ msginit -l fr_FR -o french.po -i appname.pot
Edit french.po:
# Appname l10n
# Copyright (C) 2013 Wombat Inc
# This file is distributed under the same license as the appname package.
# Wombat <wombat@example.com>, 2013.
#
msgid ""
msgstr ""
"Project-Id-Version: appname v0.1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2013-01-13 11:03+1300\n"
"PO-Revision-Date: 2013-01-13 11:10+1300\n"
"Last-Translator: Rich <rich#example.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
#: app/app.go:15
msgid "Yes."
msgstr "Oui."
Generate the binary (the file that'll actually get deployed with the app):
$ msgfmt -c -v -o locale/fr_FR/LC_MESSAGES/appname.mo french.po
Final directory structure:
~appname/
|~app/
| `-app.go
|~github.com/
| `~samuel/
| `~go-gettext/
| +locale/
| |-catalog.go
| |-domain.go
| `-mo.go
|~locale/
| `~fr_FR/
| `LC_MESSAGES/
| `-appname.mo
`-app.yaml
(locale directory under go-gettext holds test data, could be removed for deployment.)
If all goes well, a visit to appname should display "Oui."

go-i18n is an alternative package with some nice features:
Implements CLDR plural rules.
Uses text/template for strings with variables.
Translation files are simple JSON.
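For reference, a go-i18n translation file is just a JSON array of id/translation pairs; roughly like the following (the exact schema may differ between go-i18n versions, so treat this as an illustrative sketch):

```json
[
  { "id": "login_successful", "translation": "Connexion réussie." },
  { "id": "error_datastore", "translation": "Erreur de datastore." }
]
```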

GNU Gettext is widely adopted as a de facto standard for i18n solutions.
To use .po files directly from your Go project and load all translations in memory for better performance, you can use my package: https://github.com/leonelquinteros/gotext
It's fairly simple and straight to the point.
So, given a default.po file (formatted after GNU gettext: https://www.gnu.org/software/gettext/manual/html_node/PO-Files.html) located in /path/to/locales/es_ES/default.po you can load it using this package and start consuming the translations right away:
import "github.com/leonelquinteros/gotext"
func main() {
// Configure package
gotext.SetLibrary("/path/to/locales")
gotext.SetLanguage("es_ES")
// Translate text from default domain
println(gotext.Get("Translate this text"))
}
If you prefer to have the translations defined in a string for a more "focused" use, you can parse a PO formatted string with a Po object:
import "github.com/leonelquinteros/gotext"
func main() {
// Set PO content
str := `
msgid "One apple"
msgstr "Una manzana"
msgid "One orange"
msgstr "Una naranja"
msgid "My name is %s"
msgstr "Mi nombre es %s"
`
// Create Po object
po := new(Po)
po.Parse(str)
// Get a translated string
println(po.Get("One orange"))
// Get a translated string using variables inside the translation
name := "Tom"
println(po.Get("My name is %s", name))
}
As you can see on the last example, it's also possible to use variables inside the translation strings.
While most solutions are pretty similar, including yours, using a common format such as gettext can bring some extra benefits.
Also, your solution doesn't seem to be safe for concurrent use (when consumed from several goroutines). This package handles all that for you. There are also unit tests for the package, and contributions are welcome.

Related

shake - rule finished running but did not produce file:

I'm trying to use shake to convert some markdown files to html ("bake"). The markdown files are in a directory "dough" and the html should go to "baked". The goal is to produce the index.html file, which links to the other files.
This is my first use of shake!
The conversion works, but at the end the first rule produces the error
`rule finished running but did not produce file:`
The cause is perhaps that the index.html file is produced earlier (by the second rule). How can I tell the first rule not to expect a result (or force it to be produced again)?
Secondary question: how do I change the first rule to collect files with both the "md" and "markdown" extensions?
Thank you for the help! Suggestions for improvements are most welcome!
bakedD = "site/baked" -- toFilePath bakedPath
doughD = "site/dough"

shakeWrapped :: IO ()
shakeWrapped = shakeArgs shakeOptions { shakeFiles = bakedD
                                      , shakeVerbosity = Loud
                                      , shakeLint = Just LintBasic
                                      } $
    do
        want ["index" <.> "html"]

        "index" <.> "html" %> \out ->
            do
                mds <- getDirectoryFiles doughD ["//*.md"]
                let htmlFiles = [bakedD </> md -<.> "html" | md <- mds]
                need htmlFiles
                liftIO $ bakeOneFileIO "baked/index.html"

        (bakedD <> "//*.html") %> \out ->
            do
                let c = dropDirectory1 $ out -<.> "md"
                liftIO $ bakeOneFileIO c
The error message notes that you declared the rule to produce index.html, but it doesn't produce that file. From a read of your build system, it appears it produces baked/index.html? If so, change the want line area to read:
do
want ["baked/index.html"]
"baked/index.html" %> \out ->
Now you are saying at the end of the execution you want to produce a file baked/index.html, and that here is a rule that produces baked/index.html. (If it's really producing site/baked/index.html then adjust appropriately.)
Addressing your second question, mds <- getDirectoryFiles doughD ["//*.md","//*.markdown"] will detect both extensions.
As for style tips, using "index" <.> "html" is not really helping - "index.html" is identical but clearer to read. Other than that, it seems pretty idiomatic.
The issue was that the first rule wants a file, but this file is included (and produced) by the second rule. An indication of this problematic case is that the \out variable is unused and producing index.html is not actually required in this rule (it is covered by the second rule). One can take this as a sign that a phony rule would be appropriate, which simplifies the code:
bakedD = "site/baked" -- toFilePath bakedPath
doughD = "site/dough"

shakeWrapped :: IO ()
shakeWrapped = shakeArgs shakeOptions { shakeFiles = bakedD
                                      , shakeVerbosity = Loud
                                      , shakeLint = Just LintBasic
                                      } $
    do
        want ["allMarkdownConversion"]

        phony "allMarkdownConversion" $
            do
                mds <- getDirectoryFiles doughD ["//*.md"] -- markdown ext ??
                let htmlFiles = [bakedD </> md -<.> "html" | md <- mds]
                -- liftIO $ putIOwords ["shakeWrapped - htmlFile", showT htmlFiles]
                need htmlFiles

        (bakedD <> "//*.html") %> \out ->
            do
                let c = dropDirectory1 $ out -<.> "md"
                liftIO $ bakeOneFileIO c
I think that shake is a very convenient method to add a cache to a static site generator; it rebuilds only what is required!

DBF File with unreadable fields. Are they encrypted, encoded weird, or something else?

I have a program that installs an updated database monthly into a special software we use. I get an .exe, run it, the .exe "installs" a bunch of DBF/CDX files into a folder, and then "hooks up" the database info into our software somehow.
Here is what the "installation" output folder looks like every month:
I've opened the DBF I'm most interested in pulling info from (parts.dbf) (with at least 4 different pieces of software, I believe) and browsed the data. Most of the fields look fine, readable, all is good. However, the 2 fields that I NEED (Prices and Part Numbers) are unreadable. In the Parts column all of the fields show 10 or 12 characters followed by a bunch of 9's (examples: <\MFMIFJHMFll999999999999999999, KI9e^Z]pbk^999999999999999999, JIFIPKMFL999999999999999999999). In the Price column it's similar, just not as many characters (examples: LJKLGIQ999, IGII999999, JMQJGLL999).
Here is a screenshot of what I'm seeing exactly:
I have googled just about everything I know to google. I've downloaded different programs, tried to pull the data into Crystal Reports, tried to encode it differently (not sure I did that right, though), tried to figure out how to decrypt it (that journey was short-lived because it was so over my head), and just generally been pulling my hair out over this for weeks. I don't know what to do because I don't even really know where to begin. I'm just stabbing in the dark.
I THINK this file was created in some version of FoxPro but I could be wrong. When I view the information in our software it all shows up fine. Part Numbers and Prices look like readable human characters.
Example of data in our software:
I'm out of ideas. I need to know what I'm working with so I can work on figuring out how to "fix it". Is this a FoxPro file? Is it encoded in a way that I need to change? Is it encrypted data in those two fields? Am I way off on everything?
Ideally, I'd love to pull this data into Crystal Reports and do my reporting thing with the data. Even Excel could probably work okay. As it stands though I can't do much reporting with a bunch of weird characters and 9's.
Any help with this would be greatly appreciated.
Screenshot of Schema, per comment section:
Yes, 0x03 in the header's first byte means it is a FoxBASE table. As cHao already pointed out, the author decided to create those columns with some byte shifting of each character (I wouldn't call that encryption, though; it's too easy to solve for any programmer, or even a non-programmer with some pattern discovery).
Now the question is how you can use that data without damaging the original. One idea is to take a copy, alter the data in it, and use that copy instead. Doing that in some programming language is easy if you are a programmer, but you say you are not. Then comes the question of which language's code you could simply get and compile on your computer.
Well, I wanted to play with this as a skill test for myself and came up with some C# code. It was quite easy to write and should compile on any Windows machine (or so I thought; I had been doing that for years). I was mistaken. I don't know why, nor do I have the will to investigate, but the executable created with the command-line compiler (any Windows machine already has one) was blocked by my antivirus! I signed it, but nothing changed. I gave up very quickly.
Luckily there was another choice, which I think is better anyway: Go <g> - write and compile it with Go, the fantastic language from Google. If you want to spare 10-15 minutes at most, I will give you the code and show how to compile it into an exe on your computer. First, here is the code itself:
package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
	"strings"
	"time"

	"github.com/jmoiron/sqlx"
	_ "github.com/mattn/go-adodb"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatal("You need to supply an input filename.")
	}
	source := os.Args[1]
	if _, err := os.Stat(source); os.IsNotExist(err) {
		log.Fatalf("File [%s] doesn't exist.", source)
	}
	log.Printf("Converting [%s]...", source)
	saveAs := GetSaveAsName(source)
	log.Printf("Started conversion on copy [%s]", saveAs)
	ConvertData(saveAs)
	log.Println("Conversion complete.")
}

func ConvertData(filename string) {
	// Build the "shifted" and "plain" character sets for chrtran,
	// skipping the double-quote character, which is appended separately.
	srcBytes := make([]byte, 127-32-1)
	dstBytes := make([]byte, 127-32-1)
	for i := 32; i < 34; i++ {
		srcBytes[i-32] = byte(i + 25)
		dstBytes[i-32] = byte(i)
	}
	for i := 35; i < 127; i++ {
		srcBytes[i-33] = byte(i + 25)
		dstBytes[i-33] = byte(i)
	}
	src := string(srcBytes) + string(byte('"')+25)
	dst := string(dstBytes)

	dbPath, dbName := filepath.Split(filename)
	db, err := sqlx.Open("adodb", `Provider=VFPOLEDB;Data Source=`+dbPath)
	e(err)
	defer db.Close()

	stmt := fmt.Sprintf(`update ('%s') set
		p_part_num = chrtran(p_part_num, "%s", "%s"+'"'),
		p_price = chrtran(p_price, "%s", "%s"+'"')`,
		dbName, src, dst, src, dst)
	_, err = db.Exec(stmt)
	e(err)
}

func GetSaveAsName(source string) string {
	fp, err := filepath.Abs(source)
	e(err)
	dir, fn := filepath.Split(fp)
	targetFileName := filepath.Join(dir,
		fmt.Sprintf("%s_copy%d.dbf",
			strings.Replace(strings.ToLower(fn), ".dbf", "", 1),
			time.Now().Unix()))

	in, err := os.Open(source)
	e(err)
	defer in.Close()
	out, err := os.Create(targetFileName)
	e(err)
	defer out.Close()
	_, err = io.Copy(out, in)
	e(err)
	err = out.Close()
	e(err)
	return targetFileName
}

func e(err error) {
	if err != nil {
		log.Fatal(err)
	}
}
And here are the steps to create an executable out of it (and to have Go available on your computer for other needs):
Download the Go language from Google and install it. Its installer is simple to use and finishes in a few seconds.
Open a command prompt. Type:
go version [enter]
-You should see the installed Go version (as of now, 1.10).
-Type
go env [enter]
and check GOPATH; it points to the base folder for your Go projects. Go to that folder and create 4 folders named:
bin, pkg, src and vendor
By default GOPATH is "Go" under your home folder, looks like this:
c:\users\myUserName\Go
after creating folders you would have:
c:\users\myUserName\Go
c:\users\myUserName\Go\bin
c:\users\myUserName\Go\pkg
c:\users\myUserName\Go\src
c:\users\myUserName\Go\vendor
Using any text editor (Notepad.exe, for example), copy & paste and save the code as, say, "MyCustomConverter.go" into the src folder.
The code has 2 external libraries that you need to get. Change directory to your GOPATH (not really necessary, but my habit at least) and get those libraries by typing:
cd %GOPATH%
go get -v github.com/jmoiron/sqlx
go get -v github.com/mattn/go-adodb
You are ready to compile your code.
cd src
set GOARCH=386
go build MyCustomConverter.go
This would create MyCustomConverter.exe that you can use for conversion.
set GOARCH=386 is needed in this special case, because the VFP OLEDB driver is a 32-bit driver.
Oh, I forgot to mention: it uses the VFPOLEDB driver, which you can download from here and install.
You would use the executable like this:
MyCustomConverter.exe "c:\My Folder\parts.dbf"
and it would create a modified version of that named as:
"c:\My Folder\parts_copyXXXXXXXXXX.dbf"
where XXXXXXXXXXX is a timestamp value (so each run creates another copy; it doesn't overwrite one that may already exist).
Instead of going to the command prompt every time and typing the full path of your parts table, you could copy MyCustomConverter.exe onto your desktop and drag & drop parts.dbf onto the exe from Windows Explorer.
(It was a nice exercise for my Go coding - critics might ask why I didn't use parameters, but I really had good reasons, namely the driver and the Go library support.)
I THINK this file was created in some version of FoxPro
While the DBF Data Tables were CREATED by Foxpro, they are POPULATED by an APPLICATION which may or may not have been written in Foxpro.
And yes, you do not need to worry about the CDX files unless you want to organize (sequence) the data by one of its Indexes or to establish Relationships between multiple Data Tables for processing purposes. However unless you were to do that using Foxpro/Visual Foxpro itself, it wouldn't be of use to you anyway.
From the comments that you have already received, it looks as though the developers of the APPLICATION that writes the field values into the DBF Data Tables might have encrypted the data. And it also seems like you may have found how to decrypt it using the suggestions above.
I'm no programmer unfortunately
If that is the case then I'd suggest that you STOP RIGHT NOW before you introduce more problems than you want. Blindly 'mucking' around with the data might just make things worse.
If this project is BUSINESS CRITICAL then you should hire a software consultant familiar with Foxpro/Visual Foxpro to get the work done - after which you can do whatever you want. Remember that if something is BUSINESS CRITICAL then it is worth spending the $$$$
Good Luck

How to run a .pl test on C project in Eclipse on Windows?

I have to make a C (not C++) project to the specifications given by my teacher.
To allow us to test this project he has given us a .pl file that should test the project and a folder full of .in and .out files.
I work on a Win10 machine and have Eclipse for C (Kepler) installed.
How can I set up my project to run the provided test?
Do I need to change anything in the test, since I'm not working on Linux or from a command line?
The program is a train travel planner.
Here is the .pl file:
#!/usr/bin/env perl
use warnings;
use strict;

my ( $createmode ) = @ARGV;
my $testdir = "./tests";

if( defined($createmode) ) {
    if($createmode cmp "create") {
        print "Brug:\n";
        print "\tcheck.pl - tests if your programs output matches .out-filerne\n";
        print "\tcheck.pl create - makes new .out-files (deletes the excisting files!)\n";
        exit();
    }
    $createmode=1;
}

# print "$testdir/tests.tst";
open(TESTS, "$testdir/tests.tst");

my $koereplan;
my $startst;
my $slutst;

while (<TESTS>) {
    /([\w\d]+)\.in\s+\"(.+)\"\s+\"(.+)\"/ && do {
        $koereplan="$testdir/$1";
        $startst=$2;
        $slutst=$3;
        # print $koereplan."\t".$startst."\t".$slutst."\n";
        open(RUN, "./travelplanning $koereplan.in '$startst' '$slutst' |");
        my $cost=0;
        while(<RUN>) {
            /^(\d+)\s+(\d+)$/ && do {
                $cost=$1+$2*15;
            }
        };
        # print "Cost fra programmet: $cost";
        my $outfile="$koereplan.$startst.$slutst.out";
        # print $outfile."\n";
        if($createmode) {
            open(OUT, ">$outfile") or die "Couldn't open '$outfile' for writing";
        } else {
            open(IN, "<$outfile") or die "Couldn't open '$outfile' for reading";
        }
        if($createmode) {
            print OUT "$cost\n";
        } else {
            my $facit=<IN>;
            if($facit cmp "$cost\n") {
                chomp $facit;
                print "ERROR: $koereplan.in $startst $slutst gav $cost og facit er $facit.\n";
                # last;
            } else {
                chomp $facit;
                print "SUCCES: $koereplan.in $startst $slutst gav $cost og facit er $facit.\n";
            };
        };
    };
}
Some names are in Danish, sorry about that. Køreplan = timetable, slut = end, facit = expected result.
Example of the .in files:
Esbjerg, 07:48
Bramming, 08:00
Vejen, 08:15
Kolding, 08:30
Middelfart, 08:45
Odense, 09:14
Nyborg, 09:29
Korsør, 09:42
Slagelse, 09:53
Sorø, 10:01
Ringsted, 10:10
Roskilde, 10:26
Høje Taastrup, 10:34
Valby, 10:42
København H, 10:48
This is just station names and departure times.
The .out files just contain one number each, the number of minutes the corresponding trip will take.
The scaffold project also came with makefiles, but I haven't been able to use them in my environment. I have simply moved the "business" files into another project made in Eclipse, and that works fine for compiling and running the project in Eclipse. But it doesn't allow me to use the test script (which I currently can't even open in Eclipse).
If you feel it helps, here is the assignment: assignment on course website
But I think I can solve the assignment itself, it's using the teachers test I'm unsure about how to do.
To start Eclipse CDT choose 1 of these methods:
Start eclipse from the terminal that works, e.g.:
$ /path/to/eclipse.exe &
Make sure msys and mingw's bin directories are in the PATH and start eclipse the "normal" way
Then you can import your project as a new C Project and build/debug/run within CDT as normal:
Choose File menu | New | Makefile Project with Existing Code
Enter the path to your project and a name, but leave the indexer settings as <none> and press Finish
Open the Make Target view
Right-click on the project and choose New...
Fill in the target you want to build
Double-click on the new green icon and the build will run with the output in the Console view.
Something seems strange in CDT: if I use the obvious setting of MinGW GCC for the indexer settings, then I can't run make properly, as CDT insists on using the internal builder.

Vim with YouCompleteMe

I've been coding for a while and moving from IDE to IDE is a real pain when what I need most is just ctag and code completion. I finally decided to go to vim, well neovim in this case but that's beside the point.
I have ctags working and was following this tutorial from the YCM site along with this one
but I can't seem to get YCM to work for me.
I followed these steps:
cd ~/.nvim/bundle
git clone https://github.com/Valloric/YouCompleteMe.git
cd YouCompleteMe
git submodule update --init --recursive
./install.sh --clang-completer --system-libclang
This went through: it downloaded, compiled and installed some things.
Then I created the .ycm_extra_conf.py file at "~/.nvim/.ycm_extra_conf.py"
and added: let g:ycm_global_ycm_extra_conf = "~/.nvim/.ycm_extra_conf.py"
to the top of my ~/.nvimrc file
This is what I added to my .ycm_extra_conf.py file
# This file is NOT licensed under the GPLv3, which is the license for the rest
# of YouCompleteMe.
#
# Here's the license text for this file:
#
# This is free and unencumbered software released into the public domain.
#
# Anyone is free to copy, modify, publish, use, compile, sell, or
# distribute this software, either in source code form or as a compiled
# binary, for any purpose, commercial or non-commercial, and by any
# means.
#
# In jurisdictions that recognize copyright laws, the author or authors
# of this software dedicate any and all copyright interest in the
# software to the public domain. We make this dedication for the benefit
# of the public at large and to the detriment of our heirs and
# successors. We intend this dedication to be an overt act of
# relinquishment in perpetuity of all present and future rights to this
# software under copyright law.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
# ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
# OTHER DEALINGS IN THE SOFTWARE.
#
# For more information, please refer to <http://unlicense.org/>
import os
import ycm_core

# These are the compilation flags that will be used in case there's no
# compilation database set (by default, one is not set).
# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
flags = [
    '-Wall',
    '-Wextra',
    '-Werror',
    '-Wno-long-long',
    '-Wno-variadic-macros',
    '-fexceptions',
    '-DNDEBUG',
    '-ftrapv',
    '-finstrument’-functions',
    '-Wfloat’-equal',
    '-Wundef',
    '-Wshadow',
    '-Wpointer-arith',
    '-Wcast’-align',
    '-Wstrict-prototypes',
    '-Wstrict-overflow=5',
    '-Wwrite-strings',
    '-Waggregate-return',
    '-Wcast-qual',
    '-Wswitch-default',
    '-Wswitch-enum',
    '-Wconversion',
    '-Wunreachable-code',
    # You 100% do NOT need -DUSE_CLANG_COMPLETER in your flags; only the YCM
    # source code needs it.
    '-DUSE_CLANG_COMPLETER',
    # THIS IS IMPORTANT! Without a "-std=<something>" flag, clang won't know which
    # language to use when compiling headers. So it will guess. Badly. So C++
    # headers will be compiled as C headers. You don't want that so ALWAYS specify
    # a "-std=<something>".
    # For a C project, you would set this to something like 'c99' instead of
    # 'c99'.
    '-std=c+99',
    # ...and the same thing goes for the magic -x option which specifies the
    # language that the files to be compiled are written in. This is mostly
    # relevant for c headers.
    '-x',
    'c',
    '-isystem',
    '../BoostParts',
    '-isystem',
    # This path will only work on OS X, but extra paths that don't exist are not harmful
    '/System/Library/Frameworks/Python.framework/Headers',
    '-isystem',
    '../llvm/include',
    '-isystem',
    '../llvm/tools/clang/include',
    '-I',
    '.',
    '-I',
    './ClangCompleter',
    '-isystem',
    './tests/gmock/gtest',
    '-isystem',
    './tests/gmock/gtest/include',
    '-isystem',
    './tests/gmock',
    '-isystem',
    './tests/gmock/include',
]

# Set this to the absolute path to the folder (NOT the file!) containing the
# compile_commands.json file to use that instead of 'flags'. See here for
# more details: http://clang.llvm.org/docs/JSONCompilationDatabase.html
#
# You can get CMake to generate this file for you by adding:
#   set( CMAKE_EXPORT_COMPILE_COMMANDS 1 )
# to your CMakeLists.txt file.
#
# Most projects will NOT need to set this to anything; you can just change the
# 'flags' list of compilation flags. Notice that YCM itself uses that approach.
compilation_database_folder = ''

if os.path.exists( compilation_database_folder ):
    database = ycm_core.CompilationDatabase( compilation_database_folder )
else:
    database = None

SOURCE_EXTENSIONS = [ '.cpp', '.cxx', '.cc', '.c', '.m', '.mm' ]

def DirectoryOfThisScript():
    return os.path.dirname( os.path.abspath( __file__ ) )

def MakeRelativePathsInFlagsAbsolute( flags, working_directory ):
    if not working_directory:
        return list( flags )
    new_flags = []
    make_next_absolute = False
    path_flags = [ '-isystem', '-I', '-iquote', '--sysroot=' ]
    for flag in flags:
        new_flag = flag

        if make_next_absolute:
            make_next_absolute = False
            if not flag.startswith( '/' ):
                new_flag = os.path.join( working_directory, flag )

        for path_flag in path_flags:
            if flag == path_flag:
                make_next_absolute = True
                break

            if flag.startswith( path_flag ):
                path = flag[ len( path_flag ): ]
                new_flag = path_flag + os.path.join( working_directory, path )
                break

        if new_flag:
            new_flags.append( new_flag )
    return new_flags

def IsHeaderFile( filename ):
    extension = os.path.splitext( filename )[ 1 ]
    return extension in [ '.h', '.hxx', '.hpp', '.hh' ]

def GetCompilationInfoForFile( filename ):
    # The compilation_commands.json file generated by CMake does not have entries
    # for header files. So we do our best by asking the db for flags for a
    # corresponding source file, if any. If one exists, the flags for that file
    # should be good enough.
    if IsHeaderFile( filename ):
        basename = os.path.splitext( filename )[ 0 ]
        for extension in SOURCE_EXTENSIONS:
            replacement_file = basename + extension
            if os.path.exists( replacement_file ):
                compilation_info = database.GetCompilationInfoForFile(
                    replacement_file )
                if compilation_info.compiler_flags_:
                    return compilation_info
        return None
    return database.GetCompilationInfoForFile( filename )

def FlagsForFile( filename, **kwargs ):
    if database:
        # Bear in mind that compilation_info.compiler_flags_ does NOT return a
        # python list, but a "list-like" StringVec object
        compilation_info = GetCompilationInfoForFile( filename )
        if not compilation_info:
            return None

        final_flags = MakeRelativePathsInFlagsAbsolute(
            compilation_info.compiler_flags_,
            compilation_info.compiler_working_dir_ )
    else:
        relative_to = DirectoryOfThisScript()
        final_flags = MakeRelativePathsInFlagsAbsolute( flags, relative_to )

    return {
        'flags': final_flags,
        'do_cache': True
    }
I have a project where I created ctags and I'm able to get around the file in vim using its ctags support, but code completion just isn't working.
I went through the steps again inside my ~/.nvim/bundle.
This error was caused by typos in my .ycm_extra_conf.py file.
I ran :YcmDebugInfo
and got this error saying that my server has crashed:
Printing YouCompleteMe debug information...
-- Server crashed, no debug info from server
-- Server running at: http://127.0.0.1:64594
-- Server process ID: 47040
-- Server logfiles:
-- /var/folders/64/d3t4_pcs06943d651dfgp3m00000gn/T/ycm_temp/server_64594_stdout.log
-- /var/folders/64/d3t4_pcs06943d651dfgp3m00000gn/T/ycm_temp/server_64594_stderr.log
Reading the stderr log, I saw that I had some of these "\xe2" characters in my .ycm_extra_conf.py file.
Running a Python script from this answer, I found the culprits.
I first changed to the directory where the .ycm_extra_conf.py file was located, then ran python:
Brother:.vim blubee$ python
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> with open(".ycm_extra_conf.py") as fp:
... for i, line in enumerate(fp):
... if "\xe2" in line:
... print i, repr(line)
...
45 "'-finstrument\xe2\x80\x99-functions',\n"
46 "'-Wfloat\xe2\x80\x99-equal',\n"
50 "'-Wcast\xe2\x80\x99-align',\n"
Removing the stray quote marks fixed the problem; now YCM works just as expected.
You can see the file before:
'-finstrument’-functions',
'-Wfloat’-equal',
'-Wundef',
'-Wshadow',
'-Wpointer-arith',
'-Wcast’-align',
and after:
'-finstrument-functions', #removed quote after finstrument
'-Wfloat-equal', #removed quote after Wfloat
'-Wundef',
'-Wshadow',
'-Wpointer-arith',
'-Wcast-align', #removed quote after Wcast

SBT - Invoke GCC call failing in SBT, but not when I manually execute it

I'm working on a library that is going to talk to the I2C bus on my Raspberry Pi from Scala. For this I need a little bit of JNI code that interfaces with the OS on the device.
I tried to make a build file for this, which right now, looks like this:
name := "core"
organization := "nl.fizzylogic.reactivepi"
scalaVersion := "2.11.6"
val nativeClasses = List(
  "nl.fizzylogic.reactivepi.i2c.I2CDevice"
)

val nativeDeviceSources = List(
  "src/jni/nl_fizzylogic_reativepi_i2c_I2CDevice.c"
)

val nativeGenerateHeaders = taskKey[Int]("Generates JNI headers for the library")
val nativeCompile = taskKey[Int]("Compiles the native library")

nativeGenerateHeaders := {
  ("javah -classpath target/scala-2.11/classes -d src/jni " + nativeClasses.mkString(" ")) !
}

nativeCompile := {
  ("gcc -Wl -add-stdcall-alias -I$JAVA_HOME/include -I$JAVA_HOME/include/darwin -I$JAVA_HOME/include/linux -shared -o target/reactivepi.so " + nativeDeviceSources.mkString(" ")) !
}
So far the javah call is successful. But when I invoke the nativeCompile task, GCC tells me it can't find the jni.h file. However when I copy and paste the command from the build file to my terminal and execute it, it succeeds.
It looks like it is not picking up the include paths when I am executing gcc from my custom build task. But I have no idea what I'm doing wrong here.
