I've been using VSCode for C professionally almost every day for over a year. Now I've hit something that is really affecting my productivity.
When I open a big project, features like "Go to Definition", "Go to Declaration", "Peek...", etc. don't work. It's hard to describe how 'big' the project is: there are source files with over 26k lines, and a full build can take up to 45 minutes. When I work with a more reasonably sized project I have no issues, so until now I assumed this was a limitation of the editor due to the size of my project and resigned myself to it. At this point it really bothers me and I would like to find a solution.
What strikes me is that searching the whole project (Ctrl + Shift + F) is blazing fast and works brilliantly, so VSCode does seem capable of handling this big project.
C/C++ extension from Microsoft, latest version (v0.28.3)
VSCode, latest version (1.46.1)
Windows 10
Do you think there is a solution for this? Have you used VSCode with massive projects?
Edit: by 'don't work' I mean that it tries to perform the action but stays 'thinking' indefinitely.
Most probably it is not "not working" but just "pretty slow". This is a known problem for C/C++ projects using the C/C++ extension for Visual Studio Code. The IntelliSense indexer needs some time (especially if you are not limiting it via limitSymbolsToIncludedHeaders or something similar). You could try reducing the number of parsed files by using explicit browse paths in your c_cpp_properties.json, like:
"browse": {
"path": [
"/usr/include/",
"/usr/local/include/",
"${workspaceRoot}/../include",
"${workspaceRoot}/dir1",
"${workspaceRoot}/dir2",
"${workspaceRoot}/dir3/src/c++",
"${workspaceRoot}/dir5",
"${workspaceRoot}/dir6/src",
"${workspaceRoot}/dir7/src",
"${workspaceRoot}/dir4"
],
and excluding, for example, IDE/SDK files for which you do not need autocompletion, Go to Symbol, or Go to Definition.
For more explanation see: https://github.com/microsoft/vscode-cpptools/issues/1695
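For reference, the limitSymbolsToIncludedHeaders flag mentioned above goes inside the same browse block. A minimal sketch of a complete c_cpp_properties.json (the configuration name and paths are placeholders, adjust them to your project):

{
    "configurations": [
        {
            "name": "Win32",
            "includePath": ["${workspaceRoot}/include"],
            "browse": {
                "path": [
                    "${workspaceRoot}/src",
                    "${workspaceRoot}/include"
                ],
                "limitSymbolsToIncludedHeaders": true
            }
        }
    ],
    "version": 4
}

With limitSymbolsToIncludedHeaders enabled, the browse database is built only from headers that are actually included, which usually shrinks the indexing work considerably on a large tree.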
When running tools such as formatters and linters with "auto-correction" options, it can happen that the input and output of a rule are the same file; for example:
"//*.hs" %> \out ->
cmd_ "ormolu" "-m" "inplace" out
-- OR
batch 10 ("//*.hs" %>)
( \out -> do
cmd_ "ormolu" "-m" "inplace" out
pure out
)
(cmd_ "hlint")
This seems to work "correctly" (the rule is re-run if the source file is needed and has changed), but we're unsure whether this is a happy coincidence or Shake working as designed, especially when we start thinking about cached results from shakeShare or, in the future, Cloud Shake. Is this the best way to handle this type of rule, or is there something better?
There is no principled way in Shake to write a rule that replaces a source file, although for a source code formatter anything else isn't very useful. Shake assumes that inputs don't change while the build is running. It's likely that passing --lint will report a lint error, and the pattern would be incompatible with Cloud Shake. The official advice would be to make such changes in a separate, non-Shake pass before you call shake.
However, if it works for you and is useful, I wouldn't worry overly. The pattern has tests in Shake, and it's something plenty of people do. You can turn off cloud caching on a per-file basis with historyDisable.
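If you want to be explicit about that, here is a minimal sketch, assuming the same ormolu rule as in the question and a Shake version that exports historyDisable:

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
    -- the rule is demanded elsewhere via need, exactly as in the question
    "//*.hs" %> \out -> do
        -- never record this rule's result in shakeShare / Cloud Shake,
        -- so a cached copy cannot be restored over a fresher edit
        historyDisable
        cmd_ "ormolu" "-m" "inplace" out

historyDisable only affects the rule it runs inside, so the rest of the build can still use the shared cache as normal.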
On OS X, I generated a set of ctags for the system includes using the following command:
ctags -f c -h ".h" -R --c-kinds=+p --fields=+iaS --extra=+q /usr/include
This was run inside of a ~/.vim/ctags/ directory, where I put all of the ctags I generate for system-wide header files (I also have stuff for ROS and CPP that I load conditionally, but that's neither here nor there).
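(For context, a tags file generated this way is typically wired in with a line like the following; the exact path here is an assumption on my part, matching the directory above.)

" assumed location of the system-wide tags file generated above
set tags+=~/.vim/ctags/c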
Anyway: the tags file is set correctly in my .vimrc, and Vim can definitely see the tags, but for some reason the autocomplete popup will only display results from #included header files if I write out the entire symbol and then start backspacing. For example, if I #include <string.h> in a project and want to call strlen(), and I start to type str into the active Vim buffer, I will only get results for symbols that are currently in the buffer. But if I type out strlen, then backspace one or two characters and hit <C-n>, the popup menu is populated with matches from the other included header files.
EDIT: It turns out that if I just type "s" and then hit <C-n>, it works as well. So it seems completion only works when the popup menu is launched manually, which makes me think it's a plugin problem (see below).
Additional information:
completeopt is set to completeopt=menuone,menu,preview,longest
I have OmniCppComplete, which I suppose could be interfering with the behavior. It is currently loaded unconditionally rather than only for C++ files. If you want me to edit and post my OmniCppComplete settings from my .vimrc, just ask.
I also have AutoComplPop installed, but I haven't done anything to configure it, so it's running with its default settings. I haven't really researched the plugin, so I have no idea whether some of its behavior could be interfering with the results.
I have AutoTag and TagBar installed, but those should only be fiddling with the current directory's local tagfile.
I'm honestly pretty new to Vim, and I just have no idea where to start debugging this issue, whether it be with a random plugin or with my .vimrc settings.
Vim has many specific completion mechanisms.
<C-n> and <C-p> use many sources defined by the complete option. By default, they will provide completion using the current and all loaded and unloaded buffers, tags and included files. While you can usually get quite useful suggestions with these, it is a bit of a "catch-all" solution: it is not reliable at all if you work on reasonably large projects.
<C-x><C-]> uses only tags so it may be a little more useful to you.
And there are many more, see :h ins-completion.
Omni completion is smarter: it typically runs a custom filetype-specific script that tries hard to provide meaningful completion. It is triggered by <C-x><C-o> and you can read about it in :h ft-c-omni. Omni completion is often a better choice when working with code.
Because you have two overlapping "auto"-completion plugins, it's hard to say which completion mechanism is at work. You should disable those plugins and play around with the different completion mechanisms available to you.
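While you experiment, a minimal sketch for your .vimrc; which sources to keep in 'complete' is a matter of taste rather than anything your setup dictates:

" restrict <C-n>/<C-p> to the current buffer and tags,
" dropping the slow and noisy included-files source
set complete=.,t
" then compare the dedicated mechanisms by hand:
"   <C-x><C-]>  tags only
"   <C-x><C-o>  omni completion (see :h ft-c-omni)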
I have not mastered this yet, but I do think the following observation may be of help.
Vim's default autocomplete, which can be quite noisy, often gets in the way of what you call with <C-x><C-o>. Specifically, I found myself calling up my tag-based completions with <C-x><C-o>, only to have them replaced, as I kept typing, by Vim's default suggestions drawn from my open buffers.
The suggestion of shutting off one of the plugins makes sense. In my case the key was how to shut down that default behavior. I have seen several people (among whom I now include myself) set the length required before automatic completion triggers to a high number. For me that is:
let g:deoplete#auto_complete_start_length = 99
... this way you eliminate the default layer of completions that comes and goes regardless of which completion commands you actually intended to use.
This still feels like a hack but it helps keep my work focused on the tag-based completions.
FYI: I use NVIM on a Mac.
I followed the tutorial on http://silversprite.codeplex.com/ and got rid of a few issues that were expected (the colors, etc.), but there is one compile error left:
Error 2 The type 'Microsoft.Xna.Framework.Graphics.VertexDeclaration' exists in both 'c:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\Silverlight\v5.0\Microsoft.Xna.Framework.Graphics.dll' and 'C:\Users\Brandon\Desktop\SilverSprite.dll' C:\Users\Brandon\Desktop\Projects\Other projects\Game Jam prac\Silverlight3dApp1\Silverlight3dApp1\Silverlight3dApp\VertexPositionColorNormal.cs 31
I've been searching for over an hour and can't find a solution.
The error means exactly what it says: there is a type, VertexDeclaration, that is defined by both Silverlight and SilverSprite. When your project tries to resolve which one to use, it can't decide.
SilverSprite is, and has always been, kind of buggy. This appears to be a bug in SilverSprite. It's coming from this file, which contains a declaration of VertexDeclaration that is nothing like the actual API.
Fortunately you don't have to implement it yourself, because Silverlight provides it. I suggest you download the SilverSprite source, include it as a project in your solution, and reference that instead of the DLL (i.e. build SilverSprite from source yourself). Then you can easily modify it and simply delete the bogus type definition; your code will then automatically use the real one.
If you come across any other bugs, I suggest you look at ExEn. I made it the last time I tried to use SilverSprite (although this was before Silverlight 5), and I fixed many, many bugs. You might find it useful to salvage code from.
I am writing my MSc thesis in LaTeX and I have the problem that sometimes words are hyphenated in the wrong place.
My language is Spanish and I'm using the babel package.
How can I solve this?
For example: propuestos comes out as prop-uestos (with uestos on the next line). It should be pro-puestos.
Thanks!!
If you only have a small number of hyphenation errors to correct, you can use \hyphenation to fix them. For instance: \hyphenation{pro-puestos}. This command goes after \documentclass and before \begin{document}.
You can put more than one dash in, if you want to give TeX more line-breaking options: \hyphenation{tele-mun-dos}. You can list many words inside the braces; put spaces between them.
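Putting those pieces together, a minimal sketch; the word is the example from the question, and loading babel with the spanish option is assumed to match what you already do:

\documentclass{article}
\usepackage[spanish]{babel}   % loads the Spanish hyphenation patterns, if installed
\hyphenation{pro-pues-tos pro-pues-to}   % manual exceptions, separated by spaces
\begin{document}
Un texto de prueba con muchos proyectos propuestos para comprobar la división de palabras.
\end{document}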
If more than a handful of words are wrong, though, TeX is probably using hyphenation patterns for the wrong language; indeed, if "propuestos" were an English word, it would be hyphenated after "prop", which is another point in favor of that theory. Do you get a message like this when you run LaTeX?
Package babel Warning: No hyphenation patterns were loaded for
(babel) the language `Spanish'
(babel) I will use the patterns loaded for \language=0 instead.
If so, you need to reconfigure your TeX installation with Spanish hyphenation turned on. There should be instructions for that in the manuals that came with the installation. Unfortunately, this is one of the places where TeX's age shows through: you can't just load a package with the proper hyphenation rules (or babel would do that); you have to do it when compiling the "base format" with INITEX, which is a maintenance operation. Modern TeX installations have nice utilities for this, but they're all different and I don't know which one you're using.
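As an illustration only: on a current TeX Live installation, the maintenance step usually boils down to something like the following (the package and tool names are assumptions about your setup):

# install the Spanish hyphenation patterns and rebuild the formats
tlmgr install hyphen-spanish
fmtutil-sys --all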
I have a huge project, written entirely in C, with a single makefile used to compile it. The project's C files contain lots of capitalization problems in their #include lines, meaning there are tons of header file names that are spelled with the wrong case in lots of C files.
The problem is that I need to migrate this project to compile on a Linux machine, and since Linux is case-sensitive I get tons of errors.
Is there an elegant way to run the makefile on Linux and tell it to ignore case sensitivity?
Any other solution would be welcome as well.
Thanks a lot.
Motti.
You'll have to fix everything by hand: either rename every file or fix every #include. Even in a huge project (comparable to the Linux kernel), it should be possible to do this in an hour or two. Automation may be possible, but the manual way is probably better, because a script won't be able to guess which name is right: the filename, or the name used in the #include.
Besides, this situation is the fault of the original project developer. If they hadn't been sloppy and had named every header in every #include correctly, this wouldn't have happened. Technically, this is a code problem similar to a syntax error. The only right way to deal with it is to fix it.
I think it wouldn't take too long to write a small script that first walks the directories and then fixes the header names in the C sources. Step by step:
1. Scan the header folders and collect the filenames.
2. Make a lowercased copy of each name. You now have pairs of original and lowercased names.
3. Scan the C source files and find every line containing "#include".
4. Lowercase the included filename.
5. Look up the lowercased filename in the list collected from the headers.
6. Replace the name in the source line with the original header filename.
You should put the modified files into a separate folder structure to avoid overwriting the whole source tree with potentially buggy output. Don't forget to create the target folders while scanning the source tree.
I recommend a scripting language for this task; I prefer PHP, but only because it's the only scripting language I know. Yes, it will run for a while, but only once.
(I bet you will have other difficulties with that project; this kind of problem is not typically a sign of high-quality work.)
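A rough sketch of such a script in Python rather than PHP; the directory names are placeholders and the #include matching is deliberately simple, so treat it as a starting point, not a drop-in tool:

import os
import re
import shutil

SRC_DIR = "src"          # tree containing the .c/.h sources (placeholder)
HDR_DIR = "include"      # tree containing the headers (placeholder)
OUT_DIR = "fixed-src"    # modified copies go here; originals stay untouched

# steps 1-2: collect the real header names, indexed by their lowercased form
real_name = {}
for root, _, files in os.walk(HDR_DIR):
    for name in files:
        if name.endswith(".h"):
            real_name[name.lower()] = name

include_re = re.compile(r'(#\s*include\s*[<"])([^">]+)([">])')

def fix_line(line):
    # steps 3-6: lowercase the included name, look it up, substitute the real one
    def repl(match):
        head, base = os.path.split(match.group(2))
        fixed = real_name.get(base.lower(), base)
        prefix = head + "/" if head else ""
        return match.group(1) + prefix + fixed + match.group(3)
    return include_re.sub(repl, line)

for root, _, files in os.walk(SRC_DIR):
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(OUT_DIR, os.path.relpath(src, SRC_DIR))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        if name.endswith((".c", ".h")):
            with open(src, errors="replace") as f:
                fixed = [fix_line(l) for l in f]
            with open(dst, "w") as f:
                f.writelines(fixed)
        else:
            shutil.copy2(src, dst)   # copy non-source files unchanged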
Well, I can only tell you that you need to change the case of those header names. I don't know of any way to make it fully automatic, but you can use cscope to make the job easier.
http://www.linux-tutorial.info/modules.php?name=ManPage&sec=1&manpage=cscope
You can mount the files on a case-insensitive file system. FAT comes to mind. ntfs-3g does not appear to support this.
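For illustration, that could look something like the sketch below (the image size and mount point are placeholders; note that FAT does not preserve symlinks or permissions):

# create a FAT image, mount it loopback, and copy the project onto it;
# vfat lookups are case-insensitive, so the mismatched #includes resolve
dd if=/dev/zero of=project.img bs=1M count=2048
mkfs.vfat project.img
sudo mkdir -p /mnt/project
sudo mount -o loop,uid=$(id -u) project.img /mnt/project
cp -r ~/myproject/. /mnt/project/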
I use the "find all" and "replace all" functionality of Source Insight when I have to do a complete replacement. Your problem seems quite big, but you can try replacing every header file name across all occurrences in the source files using the "Find All" + "Replace" functionality. You can use Notepad++ to do the same.
A long time ago there was a great tool under MPW (Macintosh Programmer's Workshop) called Canon. It was used to canonize text files, i.e. make all symbols found in a given reference list use the same upper/lower case. This tool would be ideal for a task like this; I wonder if anything similar exists under Linux?