I'm having an intermittent problem that I'm trying to track down. Every now and then a significant portion of my src directory is being erased (like 90%+ of all files). I'll be working on my project and all of a sudden I'll get an error, look at git status and it will show nearly all of the files in my repo have been deleted. Then I have to run a bunch of git checkout -- commands and I'm lucky if I don't lose a bunch of work.
Can I use inotify or another program to watch my src directory and report which program is deleting the files? I have a feeling it's gulp, but I have no evidence beyond the anecdotal, and I don't want to bother a specific project's maintainers until I've nailed down the source of the problem.
OS X, by the way.
The first thing that comes to mind is to use lsof to monitor your directory, capturing the output to a file (or keeping a terminal window open).
I tested lsof +D ~/Downloads/ -r 2 on my OS X machine, and it seems to work fine.
https://unix.stackexchange.com/questions/157064/monitoring-files-continuously-with-lsof
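A minimal sketch along those lines (the project path and log file are placeholders; adjust for your setup):

lsof +D ~/myproject/src -r 2 | tee ~/lsof-src.log

One caveat: lsof polls on the interval you give it, so a process that opens files, deletes them, and exits within those two seconds can slip between samples.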
Auditing. This is exactly the kind of thing auditing is designed for.
Don't roll your own. Don't use tools designed for other purposes. Use the auditing facilities your operating system provides.
Basic tutorial for OS X is here:
OpenBSM auditing on Mac OS X
Way back in 10.3.x, Apple submitted Mac OS X and Mac OS X Server to
the National Information Assurance Partnership for Common Criteria
certification. Common Criteria certification means that the
covered hardware and software has been tested and evaluated to make
sure that it meets an established set of requirements for security and
data protection. 10.3.6 and 10.3.6 Server were tested and were found
to meet Evaluation Assurance Level 3 (EAL3) for Common Criteria
certification.
As part of that certification effort, a new piece of software appeared
from Apple: the Common Criteria Tools audit software. This software
was OpenBSM, which is an open source implementation of Sun’s Basic
Security Module (BSM) security audit API and file format. ...
Yes, it's a pain to do properly. But it will work, and the results will be definitive.
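For example, a minimal sketch of what that setup looks like (the class name is the standard BSM one, but double-check it against /etc/security/audit_class on your system):

# /etc/security/audit_control (excerpt) -- add "fd" (file delete) to the audited classes
flags:lo,fd

# reload the audit configuration, then watch delete events live
sudo audit -s
sudo praudit -l /dev/auditpipe | grep unlink

Each matching record names the process that issued the unlink call, which is exactly the evidence you need.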
I have an old Progress database, and I want to read an old Progress .db file. How can I do it?
The Progress version is from 1994, 16-bit. It was running on Novell NetWare, but that system is gone now.
Please help.
You need to have the Progress executables that go with whatever release of Progress you have that is also on a compatible platform (DOS or Novell would work). I believe that the last Novell release was Progress version 6. (The current release of Progress is OpenEdge 11.)
If you have the executables still on a PC associated with that system then you should be able to access the db. However, if the database is associated with an application and the application was distributed without source code you probably only have a runtime license. Which probably means that you cannot easily extract the data.
If you have access to more recent copies of Progress you could also try converting and updating the db to something more up to date. But that will also be non-trivial as there have been significant architectural changes in the last 25 years.
The files are binary and the format is proprietary. You cannot just read them in an editor. That software also pre-dates things like ODBC by a decade or so -- so you won't find any relief that way either.
Your best bet is probably to find a consultant who has a fully capable copy of v6 who is willing to do the work.
I'm currently working on a rather large web project which is written using C servlets (utilizing the GWAN web server). In the past I've used a couple of IDEs for my LAMP/PHP jobs, like Eclipse.
My problems with Eclipse are that you can either mirror the project locally, which isn't possible in this case as I'm working on a Mac (server does not run on OSX), or use the "remote" view, which would re-upload files when you save them.
In the latter case, the file is only partly written while uploading, which makes this a no-go for a running web server, or the file could become corrupted if the connection were lost during uploading. Also, uploading the whole file just to change a single character seems rather inefficient to me.
So I was thinking:
Wouldn't it be possible to have the IDE open Vim per SSH and mirror my changes there, and then just :w (save) ? Or use some kind of diff-files for changes?
The first one would be preferred, as it has the added advantage of Vim's .swp files, which make it possible for others to know when someone is already editing the file.
My current solution is using ssh+vim, but then I lose all the cool features I have with Eclipse and other more advanced IDEs.
Also, regarding X-Forwarding: The reason I don't like it is speed. It feels way slower than just editing locally, and takes up unneeded bandwidth, when all I want to do is basically "text editing".
P.S.: I couldn't find any more appropriate tags for the question, especially no "remote" tag, but if you know any, feel free to add them. Also, if there is another similar question, feel free to point it out - I couldn't find any.
Thank you very much.
If you're concerned about having to transmit the entire file for minor changes, the only solution that comes to my mind is running (either continuously, or on demand) an rsync job that mirrors the remote site to your local system (and back). The rsync protocol just transmits the delta information. According to Are rsync operations atomic at file level?, the change is atomic.
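A minimal sketch of such a job (the host and paths are placeholders): rsync transmits only the changed blocks, and it writes each file under a temporary name before renaming it into place, so the server never sees a half-written file:

rsync -avz ./src/ user@server:/srv/project/src/

You could run that on demand after each save, or wire it to a file-watcher.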
Another possibility: run everything in a virtual machine on your Mac. The server and the IDE/text editor are both on the same virtual machine so you don't have to fear network issues.
Because the source code on the virtual machine is under some kind of VCS the classic code → test → commit process is trivial (at least theoretically).
I've seen several questions that are the opposite of this; "How do I disable virtualization?" That is not my question. I want to force an application to run with virtualization enabled.
I have an application that ran just fine under Windows XP, but, because it writes its configuration to its working directory (a subfolder of "C:\Program Files (x86)"), it does not work completely under Windows 7. If I use task manager to turn on UAC Virtualization, it saves its config just fine, but of course it then can't load that config.
I do not want to set it to run as administrator, as it does not need those privileges. I want to set it to run with UAC Virtualization enabled.
I found a suggestion that I put some magic in the registry at HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags. For completeness I also put it in Wow6432Node, but neither had any effect.
The file system is virtualized only in certain scenarios, so is your question how to turn it on when your application does not qualify? That is unlikely to be possible. From MSDN:
Virtualization is not an option in the following scenarios:
Virtualization does not apply to applications that are elevated and run with a full administrative access token.
Virtualization supports only 32-bit applications. Non-elevated 64-bit applications simply receive an access denied message when they attempt to acquire a handle (a unique identifier) to a Windows object. Native Windows 64-bit applications are required to be compatible with UAC and to write data into the correct locations.
Virtualization is disabled for an application if the application includes an application manifest with a requested execution level attribute.
This may come way too late now, but I am the author of the suggestion you found for activating UAC virtualization, and there was a mistake in my post. The registry keys to modify are the following:
HKLM\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers\
HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers\
(notice the "Layers" appended)
so a full example would be:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Program Files (x86)\\Some Company\\someprogram.exe"="RUNASINVOKER"
Note that multiple parameters must be separated with a space character:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Program Files (x86)\\Some Company\\someprogram.exe"="WINXPSP3 RUNASINVOKER"
--
I'm sincerely sorry that you lost a fair amount of time because of my mistake.
And by the way, let me express my disagreement with Ian Boyd's post. There are places where write privileges should not be granted to everyone, such as this one, since it breaks the basic security rule that system-wide writes should be authorised to privileged principals only. Program Files is a system-wide place, not a per-user one.
All rules have exceptions of course, but in the present case, one could imagine a maliciously crafted configuration file making the program exec an arbitrary command as the user running it. On a lighter side, one could imagine a mistaken delete by another user, which would make the app fail. Back on the heavier side, application executables in Program Files are often run by the admin, sooner or later. Even if you don't want to, uninstalling a program very often runs uninstall executables that live in Program Files. Maybe the uninstall procedure will use that config file, which could have consequences if it's maliciously crafted.
Of course you may say this sounds somewhat paranoid, agreed. I did modify some NTFS ACLs in Program Files back in the Windows XP days and was able to sleep afterwards, but why take the slightest risk when the tools are available?
I found one not very well documented condition where UAC Virtualization does NOT work: when the file in Program Files is marked as read-only.
That is, suppose the file C:\Program Files\<whatever>\config.ini is marked as read-only. When the application tries to change it, UAC Virtualization will return an access denied error instead of redirecting the write to %LOCALAPPDATA%\VirtualStore\<whatever>\config.ini.
Although I did not find this documented, this behavior is probably by design, since it makes some sense.
The solution is simple: ensure that none of the files the application is supposed to modify are read-only (or just unflag all files, since the user will not be able to change them anyway).
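For example (the path is hypothetical), clearing the flag recursively from a command prompt:

attrib -R "C:\Program Files (x86)\Some Company\*.*" /S

attrib -R removes the read-only attribute; /S applies the change to matching files in all subdirectories.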
You have an application, and you want users to be able to modify registry keys or files in locations that by default only Administrators can modify.
If you were running Windows 2000, or Windows XP, or Windows Vista, or Windows 7, or Windows 8, the solution is the same:
grant appropriate permissions to those locations
For example, if your program needs to modify files in:
C:\Program Files\Blizzard\World of Warcraft
Then the correct action is to change permissions on the World of Warcraft folder. This is, in fact, a shim that Microsoft applied to World of Warcraft. (On next run it granted Everyone Full Control of the folder - how else could WoW update itself no matter which user is logged in?)
If you want users to be able to modify files in a location, you have to grant them permission. If you were a standard user trying to run WoW on Windows XP, you would hit the same problem and need to apply the same solution.
If your application is writing its configuration to:
C:\Program Files (x86)\Hyperion Pro\preferences.ini
then you do, in fact, want to grant Users Full Control of that file:
So your:
application is not set to run as an Administrator
users cannot modify the executable
users can modify preferences.ini
Granting permissions is not a bad thing; it's how you administer your server.
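A minimal sketch of doing that with the built-in icacls tool (the path echoes the example above):

icacls "C:\Program Files (x86)\Hyperion Pro\preferences.ini" /grant Users:F

:F grants Full Control; if you prefer to be tighter, :M (Modify) is enough for the application to rewrite the file.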
There are two solutions:
Install to C:\ProgramData\Contoso\Preferences.ini and ACL it at install time
Install to C:\Program Files\Contoso\Preferences.ini and ACL it at install time
And if you look at the guidance of the AppCompat guy at Microsoft:
Where Should I Write Program Data Instead of Program Files?
A common application code update is this: “my application used to write files to program files. It felt like as good a place to put it as any other. It had my application’s name on it already, and because my users were admins, it worked fine. But now I see that this may not be as great a place to stick things as I once thought, because with UAC even Administrators run with standard user-like privileges most of the time. So, where should I put my files instead?”
FOLDERID_ProgramData
The user would never want to browse here in Explorer, and settings changed here should affect every user on the machine. The default location is %systemdrive%\ProgramData, which is a hidden folder, on an installation of Windows Vista. You'll want to create your directory and set the ACLs you need at install time.
So you have two solutions:
create your file in ProgramData at install time, and ACL it so that all users can modify it at runtime
create your file in Program Files at install time, and ACL it so that all users can modify it at runtime
The only difference is semantic. The Program Files folder is meant for program files; you don't want to store data there.
And it's not because Diego Queiroz has any insight about security.
It's because only the programs themselves belong there.
Sometimes machines are imaged with the same Program Files over and over. You don't want per-machine data in your image. That data belongs in ProgramData.
And it's not a security issue.
Some people have to learn where the security boundary is.
there are quite a few good points in the other answers.
actually i have upvoted all of them.
so let's combine them together and add one more aspect ...
the OP mentions some "legacy application from the old days".
so we can assume it is x86 (32-bit) and also does not include any manifest (and in particular does not specify any "requestedExecutionLevel").
--
Roman R. has good points in his answer regarding x64 and manifest file:
https://stackoverflow.com/a/8853363/1468842
but all those conditions don't seem to apply in this case.
NovHak outlines some AppCompatFlags with RUNASINVOKER in his answer:
https://stackoverflow.com/a/25903006/1468842
Diego Queiroz adds an interesting aspect regarding the read-only flag in his answer:
https://stackoverflow.com/a/42934048/1468842
Ian Boyd states that you probably shouldn't even go for that "virtualization" at all, but instead set appropriate ACLs on the files of interest (such as "config.ini"):
https://stackoverflow.com/a/12940213/1468842
and here comes the additional / new aspect:
one can set a policy to disable all virtualization - system-wide:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"EnableVirtualization"=dword:00000000
actually i'm enforcing this policy on each and every system that i own.
because otherwise it will lead to confusing behaviour in multi-user environments.
where UserA applies some changes and everything goes fine.
but then UserB does not get the changes done by UserA.
in case some old crappy software fails then it should "fail"!
and not claim that everything went "fine".
IMHO this "Virtualization" thing was the worst design decision by microsoft, ever.
so maybe the system has this policy enabled and that's why virtualization doesn't work for you?
--
so probably the ultimate checklist would be:
is the application x86 or x64?
does the exe have a manifest (including the requestedExecutionLevel)?
have you checked the read-only attribute (e.g. of those INI files)?
is there a policy to force the EnableVirtualization to 0?
have you tried the AppCompatFlags with RUNASINVOKER?
or simply go for ACL instead of virtualization
--
in the end we are discussing how to get an old legacy application to run.
by using whatever workarounds and hacks we can think of.
this would probably be better discussed on either superuser or serverfault.
at stackoverflow (targeted at programmers) we all know: it's about time to get all of our own programs compatible with the UAC concept and to implement things the "right" way - the "microsoft" way :)
I am allowing users to upload photos, like photo albums, and also to attach files (documents for now) as mail attachments. So I assume I need some anti-virus/security tool in place to scan the files first, in case people upload infected stuff. So, two questions:
1) Are there any free or open source tools for this I can use or integrate into my environment (CodeIgniter, PHP)?
2) How do I secure the upload area from the rest of the system? Say the virus scanner fails to catch a virus and it is uploaded; how do I prevent it from infecting other files? Can the upload area be sandboxed somehow, with users accessing the content only through that filepath, so an infection does not spread to other parts of the system?
There is ClamAV, a free virus scanner. Install it and you could do something like:

function virus_detected($filename)
{
    // Path to the clamscan binary; adjust for your installation
    $clamscan = "/usr/local/bin/clamscan";
    // -i prints only infected files; --no-summary suppresses the report footer.
    // escapeshellarg() prevents shell injection via the file name.
    $result = exec($clamscan . " -i --no-summary " . escapeshellarg($filename));
    // Any output means clamscan flagged the file
    return strlen($result) > 0;
}
As for security, make sure the temporary files are uploaded to a directory outside of your web root. You should then verify the file type, rename the file to something other than its original file name, and append the appropriate extension (gif, jpg, bmp, png). I believe this should keep you fairly safe, aside from exploits in PHP itself.
For more information about verifying file types in PHP, check out:
http://www.php.net/manual/en/function.finfo-file.php
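A minimal sketch of that check (the allowed-type map and upload field name are assumptions; extend them as needed):

// Determine the real MIME type from the file contents, not the client-supplied name
$finfo = finfo_open(FILEINFO_MIME_TYPE);
$mime  = finfo_file($finfo, $_FILES['upload']['tmp_name']);
finfo_close($finfo);

$allowed = array('image/gif' => 'gif', 'image/jpeg' => 'jpg',
                 'image/png' => 'png', 'image/bmp' => 'bmp');
if (!isset($allowed[$mime])) {
    exit('File type not allowed');
}

// Rename to a generated name, with the extension we chose rather than the uploaded one
$newname = md5(uniqid('', true)) . '.' . $allowed[$mime];
move_uploaded_file($_FILES['upload']['tmp_name'], '/path/outside/webroot/' . $newname);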
I know this topic hasn't been active for three years now, but in case anyone else in the future is similarly looking for a PHP-based anti-virus solution: phpMussel, a PHP script that I've written based on ClamAV, fits the bill for what Rohit (the original poster) was looking for (a PHP-based anti-virus to protect a CMS against malicious file uploads). It may be a viable solution for those without an anti-virus daemon, program, or utility installed on their host machine, and without the ability to install one. It certainly isn't perfect and I can't guarantee that it'll catch everything, but it's certainly far better than using nothing at all.
Ideally, as Matt already suggested above, making a call to the shell to have clamscan scan the file uploads is definitely the best solution, and if this is something that a hostmaster, webmaster, or anyone in Rohit's situation is able to do, I'd second that suggestion wholly. What I've written, because it is a PHP script, has limitations inherent to anything that relies wholly on PHP in order to function. But in instances where that suggestion isn't a possibility (such as when the host machine doesn't have an anti-virus installed and shell access is disabled, which is common with cheaper shared hosting), that's where what I'm suggesting here could step in: something that only requires PHP to be installed (with the PCRE extension included, which is standard with PHP nowadays anyhow), and nothing more.
Also remember, as Matt has already suggested, to always upload outside of your root directory, to ensure that uploaded files can't be exploited by attackers (such as an attacker attempting to compromise your system by uploading backdoors or trojans). Viruses are not the only threat you need to worry about, and the vast majority of anti-virus solutions nowadays do not focus solely on viruses. Matt is also entirely correct in pointing out that no anti-virus solution is perfect, and for that reason, anyone allowing file uploads to their website or server needs to remain vigilant: an anti-virus solution is a must-have for anyone in that situation, but no holy grail of internet security that covers every possible threat exists. Also, renaming files isn't only about ensuring that they can't execute (as might be inferred from the original poster's reply comment regarding EXEs); renaming also reduces the risk of threats such as directory traversal attacks, and of an attacker overwriting an already existing file on a targeted system as a means to hide their dirty work.
Regarding the threat of files that may be malicious being missed by an anti-virus solution and then potentially infecting the system where they are being uploaded to; What a hostmaster or webmaster could potentially do in this situation is employ some sort of quick and simple encoding process that'd render the file non-executable by the system itself, but which can be easily and readily reversed by the PHP script responsible for calling that file on request, such as by way of using base64_encode(), bin2hex(), or even by just rotating a few characters and adding a salt to displace the file's magic number or something similar.
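A hedged sketch of that idea using base64 ($storedName and $mime are placeholders, assumed to come from the validation and renaming steps above):

// Store the upload encoded, so the bytes on disk never match an executable format
$raw = file_get_contents($_FILES['upload']['tmp_name']);
file_put_contents('/path/outside/webroot/' . $storedName, base64_encode($raw));

// Later, the script that serves the file reverses the encoding on request
$data = base64_decode(file_get_contents('/path/outside/webroot/' . $storedName));
header('Content-Type: ' . $mime);
echo $data;

Note that base64 adds CPU work per request and roughly 33% storage overhead; rotating a few leading bytes, as suggested above, is cheaper if that matters.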
Which configuration management tool is the best for FPGA designs, specifically Xilinx FPGAs programmed with VHDL and C for the embedded (MicroBlaze) software?
There isn't a "best", but configuration control solutions that work for software will be OK for FPGAs - the flow is very similar. I use Subversion at work and git at home, and wrote a little on 'why' at my blog.
In other answers, binary files keep getting mentioned - the only binary files I deal with are compilation products (equivalent to software object and executables), so I don't keep them in the version control repository, I keep a zipfile for each release/tag that I create with all the important (and irritatingly slow to reproduce) ones in.
I don't think it much matters what revision control tool you use -- anything that you would consider good in general will probably be OK here. I personally use Git for a sizable Verilog + software project, and I'm quite happy with it.
What will bite you in the ass -- no matter what version control you use -- is this: The Xilinx tools don't generally respect a clean division between "input" and "output" or between (human edited) "source" and (opaque) "binary." Many of the tools like to store some state information, like a last-run time or a hash value, in their "input" files meaning that you'll get lots of false changes. Coregen does this to its .xco files, and project navigator (the main GUI) does this to its .xise files. Also, both tools have a habit of inserting or removing lines for default-valued parameters, seemingly at random.
The biggest issue I've encountered is the work-flow with Coregen: In many cases, at least one of the following is true:
You have to manually edit the HDL files produced by Coregen.
The parameters that went into Coregen are stored somewhere other than the .xco file (usually in what looks like an output file).
You have to copy-and-paste the output from Coregen into your top-level design.
This means that there is no single logical source/master location for your input to the core-generating process. So even if you have the .xco file under version control, there's no expectation that the design you're running corresponds to it. If you re-generate "the same" core from its nominal inputs, you probably won't get the right outputs. And don't even think about merging.
I suggest CM tools that support version labeling and binary files. Most Software CM applications are fine with ASCII text files. They may just store a "difference" file rather than the entire file for updates.
My recommendations: PVCS, ClearCase and Subversion. DO NOT USE Microsoft SourceSafe. I don't like it because it only supports one label per revision.
I've seen Perforce and Subversion used in a couple of FPGA-intensive companies.
We use Perforce, and it's great. You can have your code that lives in Linux-land checked in side by side with your specs and docs that live in Windows-land. And you get branching, labels, etc.
I've seen everything from Clearcase to RCS used, and it is really all okay for this kind of thing. The important thing is to get a good set of check-in policies established for your group, and make sure they stick to it.
And have automated nightly regressions. That way, when someone breaks the rules, they can be identified and publicly shamed.
I have personally used Perforce, Subversion, Git, and ClearCase for FPGA projects. Since VHDL and C are just text files, any of them works fine. However, be sure to also capture the project and constraint files and any libraries you use.
Also think about what to do with the outputs, e.g. log files and bitstreams. Both tend to be big, and the bitstreams are binaries.
Previously I used Subversion, but I switched to Git two years ago. Git handles FPGA design files just as well as it handles every other text and binary file. Git is all you need for version-controlling your files and artifacts.
For building the designs, I recommend just using a single ISE project called "ise" (living in a subdirectory called "ise/"). You can take a look at my (very modest) FPGA open-source project on github for the file layout. I don't bother storing the ISE files at all since they are easy to regenerate. The only things I save are the Verilog files and some ISIM waveform config files. In other projects that use coregen I save the coregen.cgp project file and all of the *.xco scripts for regenerating cores. Then I use a Makefile for actually running coregen on the *.xco files. There are a few other Xilinx-specific files you should version control too: *.ucf, *.coe, *.xcf, etc.
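For what it's worth, a hypothetical sketch of that regeneration step (the file names are placeholders; check the -b and -p options against your ISE version's coregen documentation):

# regenerate every core from its .xco script, using the shared coregen.cgp project
cd cores && for x in *.xco; do coregen -b "$x" -p coregen.cgp; done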
I experimented with using Makefiles and the Xilinx command-line tools but found that ISE did a much better job tracking dependencies and calling the tools with the right arguments. Just don't make the mistake of trying to version control your ise/ project files or you will go mad. Xilinx has something like 300 different file types which change every release. If you want to save a file, you can try the ISE project file itself, with a .xise extension. Anything that is hard to recreate (like the golden bitfile that you know works and took 6 hours to build) you might want to copy and configuration-manage explicitly.