Obfuscation of code in a commercial product

Yesterday, my manager asked me to find and remove all references to 'previous incarnation of company' that appear in the binaries we produce for a product that we're launching in a few weeks. This got me wondering why, in a compiled stand-alone binary, there's so much human-readable content, and whether there's a simple way to obfuscate it so that the program's internals aren't hanging out in the open, so to speak (at least to anyone who opens it with a text editor or greps the file contents). Here are some examples of what I mean:
"WGL_3DFX_multisample À # ð>Unknown OpenGL error
GL_INVALID_FRAMEBUFFER_OPERATION"
" Unable to close due to unfinalised statements not an error SQL logic error or missing database access permission denied callback requested query abort database is locked database table is locked out of memory attempt to write a readonly database interrupted disk I/O error database disk image is malformed database or disk is full unable to open database file table contains no data database schema has changed String or BLOB exceeded size limit constraint failed datatype mismatch"
"flowChartDelay flowChartDisplay flowChartDocument flowChartExtract flowChartInputOutput flowChartInternalStorage flowChartMagneticDisk"
The majority of the file is human-incomprehensible stuff like this, which is more like what I'd expect from a binary:
"âÀÿ? ‰•þÿÿÇ…”þÿÿ ë‹…”þÿÿƒÀ‰…”þÿÿ‹”þÿÿ;Mà}`‹U‹‚¨ ‹”þÿÿ¶ƒúuF‹E‹ˆ° ‹•”þÿÿ·Q¯…ŒþÿÿÁ艅Œþÿÿ‹M"
I figured out I could simply do a search and replace for 'string that we don't want' and replace it with random text of the same length, and the program would run fine. That's possibly easier than making 500 edits to our source to bring it up to date with the current status of the company as a legal entity (there are a tonne of functions called name_of_previous_company_foo()), and also easier than trying to integrate some exotic obfuscation utility into our complex and proprietary build system. But it's not an especially elegant solution, and I'd still like to know if there's a way to make our binaries into something more like a black box, where someone can't just open it with a text editor and see our function and class names.
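For what it's worth, here's the kind of patch script I mean, as a rough Python sketch (the string and file names are made up, and a digitally signed binary would of course need re-signing afterwards):

import random
import string

OLD = b"name_of_previous_company"  # hypothetical string to scrub

def scrub(path):
    # Read the raw bytes, replace every occurrence with same-length filler
    # (same length is critical so offsets inside the binary don't shift),
    # and write the result to a sibling file.
    data = open(path, "rb").read()
    filler = "".join(random.choices(string.ascii_lowercase, k=len(OLD))).encode()
    with open(path + ".scrubbed", "wb") as out:
        out.write(data.replace(OLD, filler))

scrub("product.exe")  # hypothetical binary name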

People build source code obfuscator tools for "commercial software" that can scramble strings and identifiers so they aren't easily read out of the binary, but are still usable as the strings they are intended to be.
Such obfuscators tend to be language specific, because they have to handle the fine detail of the language structure.
Google "Source Code Obfuscators" and you'll find many.


Salesforce: Merge 3 Permission Sets into 1

I have 3 'old' Permission Sets (PS1, PS2 and PS3) which need to be merged into a Permission Set #4 (PS4).
PS1, PS2 and PS3 will be deprecated after their respective permissions have been added to PS4. PS4 will remain as the future Permission Set which will gather ALL the permissions for a specific set of Users.
For now, I see that this is a very manual task ("eye-ball" comparing each of PS1, PS2, PS3 with PS4 and adding the missing permissions into PS4) and, like all manual tasks, it is prone to errors.
QUESTIONS:
Can you suggest a tool to COMPARE Permission Sets to make sure I am not missing any permission?
or (even better)
Can you suggest a tool to MERGE Permission Sets in a safe way (to mitigate risk of errors)?
or
Would you recommend a "best approach" or "best practice" for this task?
Thank you very much.
Developer way
You'd need a developer to connect with sfdx (if the command line is scary, there's the VSCode editor) or a similar tool and download the "metadata", and then compare the XML files using something like WinMerge.
https://trailhead.salesforce.com/content/learn/projects/quickstart-vscode-salesforce might help if you've never done it and don't have a developer handy.
Profiles and permission sets can be very big; what's being downloaded depends on what else you're downloading, so define "everything". If you indicate in "package.xml" that you want all objects, classes and permission sets, the permission set file should include checkboxes for "Apex Class Access", field level security, allowed record types etc., but might not include "Visualforce Page Access", tab visibilities etc., because you didn't include them. There's a cool plugin for VSCode that builds the "package.xml" file for you, picking what you need.
Once you have that, you could load them up in WinMerge (or any diff tool you like) and compare up to 3 files. It takes a while to get used to (you could start by comparing two, not 3).
You'll see an overview of changed lines on the left, and you can decide to, say, make the leftmost file the merged one. Go line by line and add permissions as you see them. You could then save the final file as the 4th perm set and use the same sfdx/VSCode to deploy it.
Analyst way
If you feel like an Excel guru... This data should be queryable, so you could export it and crack some comparisons that way. Again, the checkboxes would be spread across different tables, so you'd need to compare object rights, then field level security, then class access, then...
https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_erd_profile_permissions.htm
This would be a start:
SELECT Parent.Name, SobjectType, PermissionsRead, PermissionsCreate, PermissionsEdit, PermissionsDelete
FROM ObjectPermissions
WHERE SobjectType IN ('Account', 'Contact', 'Case') AND Parent.Name IN ('PS1', 'PS2', 'PS3')
ORDER BY SobjectType, Parent.Name
It's a pretty thankless job because you'd need to write formulas across rows or pivot it somehow... Also note my PS2 didn't have access to Cases at all - SF doesn't bother holding a row with all false values, it just isn't there.
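If a script appeals more than Excel formulas, the pivot is a few lines of Python. A minimal sketch, assuming the simple_salesforce package and made-up credentials:

from collections import defaultdict
from simple_salesforce import Salesforce

# Credentials are placeholders - substitute your own.
sf = Salesforce(username="me@example.com", password="...", security_token="...")
soql = ("SELECT Parent.Name, SobjectType, PermissionsRead, PermissionsEdit "
        "FROM ObjectPermissions WHERE Parent.Name IN ('PS1','PS2','PS3')")

# (object, permission) -> {perm set name: True/False}
grid = defaultdict(dict)
for row in sf.query_all(soql)["records"]:
    ps = row["Parent"]["Name"]
    for perm in ("PermissionsRead", "PermissionsEdit"):
        grid[(row["SobjectType"], perm)][ps] = row[perm]

# A missing perm set means "all false" - SF doesn't store those rows,
# so default to False when printing the comparison grid.
for key, by_ps in sorted(grid.items()):
    print(key, {ps: by_ps.get(ps, False) for ps in ("PS1", "PS2", "PS3")})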
¥€$ way
Money solves everything, eh? Deployment & backup tools like OwnBackup, Gearset, Copado etc have something for detecting changes between projects on disk and orgs... You could rename PS2 to PS1 in another sandbox and make the tool compare them? (I'm not affiliated with any such tool vendor)
There's also https://perm-comparator.herokuapp.com/ if you're not afraid that a 3rd-party app will get sysadmin access to your org (haven't used it personally, just Googled it).
Ages ago my colleague got promising exports out of Config Workbook. Again - haven't used personally, screenshots look nice.

Cross Referencing Data Written to a Text File with an Existing Database in Delphi?

I'm trying to cross-reference data written to a text file with an existing database, i.e. check if the data written to the text file already exists in the database.
I have already created the program that writes the user's login data (name and password) to a text file. Then I started to write an algorithm to read data from the text file, but I am a bit stuck: I have the name stored in the first line of the text file and the password (string values only) stored in the next line.
I have no idea how you would check if this data already exists in the database. Would you need to extract the contents of the database first, or could you just cross-reference it directly with the database? I have already created the database (UserData.accdb) but I have not yet linked it up to the form. This is what I have so far:
procedure TForm1.btnclickClick(Sender: TObject);
var
  tRegister: TextFile;
  SName, SPword: String;
begin
  AssignFile(tRegister, 'register.txt');
  try
    Reset(tRegister);
  except
    ShowMessage('File register.txt does not exist');
    Exit;
  end;
  try
    while not EOF(tRegister) do
    begin
      ReadLn(tRegister, SName);  // name is on the first line of each pair
      ReadLn(tRegister, SPword); // password is on the line after it
      // This is where I want to add code
    end;
  finally
    CloseFile(tRegister);
  end;
end;
Please don't be too harsh, I am still new to Delphi :)
I understand from your question that you're currently stuck trying to check if a particular record exists in your database. I'll answer that very briefly because there are plenty of similar questions on this site that should help you flesh out the detail.
However, the title of your question asks about "Cross Referencing Data Written to a Text File with an Existing Database". From the description it sounds as if you're trying to reconcile data from two sources and figure out what matches and what doesn't. I'll spend a bit more time answering this because I think there'll be more valuable information in it.
To check for data in a database, you need:
A connection component which you configure to point to your database.
A query component linked to the connection component.
The query text will use a SQL statement to select rows from a particular table in your database.
I suggest your query be parametrised to select specifically the row you're looking for (I'll explain why later.)
NOTE: You could use a table component instead of a query component and this will change how you check for existing rows. It has the advantage you won't need to write SQL code. But well written SQL will be more scalable.
The options above vary by what database and what components you're using. But as I said, there are many similar questions already. With a bit of research you should be able to figure it out.
If you get stuck, you can ask a more specific question with details about what you've tried and what's not working. (Remember this is not a free "do your work for you service", and you will get backlash if it looks like that's what you're expecting.)
Reconciling data between text file and database:
There are a few different approaches. The one you have chosen is quite acceptable. Basically it boils down to:
for each Entry in TheFile
.. if the Entry exists in TheDatabase
.. .. do something with Entry
.. .. otherwise do something else with Entry
The above steps are easy to understand, so it's easy to be confident the algorithm is correct. It doesn't matter if there aren't one-liners in Delphi to implement those steps. As a programmer, you have the power to create any additional functions/procedures you need.
It is just important that the structure of the routine be kept simple.
Any of the above steps that cannot be very trivially implemented, you then want to break down into smaller steps: 2.a. 2.b. ; 3.a. 3.b. 3.c. ; etc. (This is what is meant by top-down design.)
TIP: You want to convert all the different breakdowns into their own functions and procedures. This will make maintaining your program and reusing routines you've already written much easier.
I'm going to focus on breaking down step 2. How you do this can be quite important if your database and text files grow large. For example, you could implement it so that every time you call the function to check "if Entry exists", it looks at every single record in your database. This would be very bad, because if you have m entries in your file and n entries in your database, you would be doing m x n checks.
Remember I said I'd explain why I suggest a parametrised query?
Databases are designed and written to manage data. Storing and retrieving data is their primary function, so let the database do the work of finding out whether the entry you're looking for exists. If, for example, you wrote your query to fetch all entries into your Delphi app and searched there, you would:
Increase the memory requirements of your application.
More importantly, without extra work, expose yourself to the m x n problem mentioned above.
With a parametrised query, each time EntryExists(...) is called you can change the parameter values and effectively ask the database to look for the record. The database does the work and gives you an answer. So you might, for example, write your function as follows:
function TForm1.EntryExists(const AName: string): Boolean;
begin
qryFindEntry.Close;
qryFindEntry.Parameters.ParamByName('EntryName').Value := AName;
qryFindEntry.Open;
Result := qryFindEntry.RecordCount > 0;
end;
TIP: It will be very important that you define an index on the appropriate columns in your database, otherwise every time you open the query, it will also search every record.
NOTE: Another option that is very similar would be to write a stored procedure on your database, and use a stored procedure component to call the database.
Additional comments:
Your routine to process the file is hard-coded to use register.txt.
This makes it not reusable in its current form. Rather, move the code into a separate method: procedure ProcessFile(AFileName: string);. Then, in your button click event handler, call: ProcessFile('register.txt');.
TIP: In fact it is usually a good idea to move the bulk of your code out of event handlers into methods with appropriate parameters. Change your event handlers to call these methods. Doing this will make your code easier to maintain, test and reuse.
Your exception handling is wrong
This is an extremely bad way to do exception handling.
First, you don't want to ever write unnecessary exception handling. It just bloats your code making it more difficult to read and maintain. When an exception is raised:
The program unwinds to the innermost finally/except block. (So an exception would already exit your routine - as your added code does.)
By default, an unhandled exception (meaning one you haven't swallowed somewhere) will be handled by the application exception handler. By default this simply shows an error dialog. (Again, as your added code does.)
The only change your code makes is to show a different message from the one actually raised. The problem is that you've made an incorrect assumption: "file does not exist" is not the only possible reason Reset(tRegister); might raise an exception:
The file may exist, but be exclusively locked.
The file may exist, but you don't have permission to access it.
There may be a resource error meaning the file is there but can't be opened.
So the only thing all your exception handling code has done is introduce a bug, because it now has the ability to hide the real reason for the exception. That can make troubleshooting much more difficult.
If you want to provide more information about the exception, the following is a better approach:
try
Reset(tRegister);
except
on E: Exception do
begin
//Note that the message doesn't make any assumptions about the cause of the error.
E.Message := 'Unable to open file "'+AFileName+'": ' + E.Message;
//Reraise the same exception but with extra potentially useful information.
raise;
end;
end;
The second problem is that even though you told the user about the error, you've hidden this fact from the rest of the program. Let's suppose you've found more uses for your ProcessFile method. You now have a routine that:
Receives files via email messages.
Calls ProcessFile.
Then deletes the file and the email message.
If an exception is raised in ProcessFile and you swallow (handle) it, then the above routine would delete a file that was not processed. This would obviously be bad. If you hadn't swallowed the exception, the above routine would skip the delete step because the program is looking for the next finally/except block. At least this way you still have record of the file for troubleshooting and reprocessing once the problem is resolved.
The third problem is that your exception handler is making the assumption your routine will always have a user to interact with. This limits reusability because if you now call ProcessFile in a server-side application, a dialog will pop up with no one to close it.
Leaving unresolved exceptions to be handled by the application exception handler means that you only need to change the default application exception handler in the server application, and all exceptions can be logged to file - without popping up a dialog.

How to attach and view PDF documents in an Access database

I have a very simple database in Access, but for each record I need to attach a scanned-in document (probably PDF). What is the best way to do this? The database should not just link to a file on the PC, but should copy and keep the file with it, meaning that if the original file goes missing or the database is moved or copied, the file should still be accessible from within the database. Is this possible? And what is the easiest way of doing it? If I should, I can write a macro, I just don't know where to start. Also, when I display a report of the table, I would like to just see thumbnails of the documents.
Thank you.
As the other answerers have noted, storing file data inside a database table can be a questionable practice. That said, I wouldn't personally rule it out, though if you are going to take that option, I'd strongly suggest splitting out the file data into its own table in its own backend file. For example:
Create a new database file called Scanned files.mdb (or Scanned files.accdb).
Add a single table called Scans with fields such as FileID (AutoNumber, primary key), MainTableID (matches whatever is the primary key of the main table in the main database file), FileName (Text), FileExt (Text) and FileData ('OLE object', really just a BLOB - don't actually use OLE Objects because they will bloat the database horribly).
Back in the frontend, add a reference to Scans as a linked table.
Use a bit of VBA to upload and extract files from the Scans table (if you're interested in the mechanics of this, post a separate question).
Use the VBA Shell routine (if you must) or ShellExecute from the Windows API (= the better option IMO) to open extracted data.
If you are using the newer ACCDB format, then you have the 'attachment' field type available, as smk081 suggests. This basically does most of the above steps for you, however doing things 'by hand' gives you greater flexibility - for example, it allows giving each file a 'DateScanned' or 'DateEffective' field.
That said, your requirement for thumbnails will require explicit coding whatever option you take. It might be possible to leverage the Windows file previewing API, though I'd make certain thumbnails are a definite requirement before investigating this - Access VBA is powerful enough to encourage attempts at complex solutions, but frequently not clean and modern enough to allow fulfilling them in a particularly maintainable fashion.
There is an Attachment type under Data Type when you go into Design View of your table. You can add an attachment field here. When you go into the Datasheet view of the table you can select this field for a particular row and a window will open for you to specify the attachment. This will cause your database to quickly grow in size if you add a lot of large attachments.
You can use an OLE field in a table, but I would really suggest you not use this approach. The database is going to be HUGE in no time, and you're going to regret it.
Instead, you should consider adding a field that stores the path to the file, and keep the files in one folder on your network. Then you can use a SHELL() command to open the file. What's the difference between restoring an Access database and restoring PDF files if something goes wrong? This will keep your database at a manageable size and reduce the possibility of corruption.

Better way to store updatable scientific data?

I am using a file consisting of published scientific data. I'm using this file with a program that reads in the first 5 space delimited data fields, and everything after that is considered a comment by the program.
2 example lines (of thousands):
FeII 1608.4511 0.521 55.36 -1300 M03 Journal of Physics
FeII 1611.23045 0.0321 55.36 1100 01J AJ
The program reads it as:
FeII 1608.4511 0.521 55.36 -1300
FeII 1611.23045 0.0321 55.36 1100
These numbers are each measurements and most (don't get me started) have associated errors that are not listed in this file. I would like to store this information in a useful and updatable way. That is, say the first entry FeII 1608.4511 has an error of plus/minus 0.002. Consider when a new measurement is made and changes it to: FeII 1608.45034 plus/minus 0.0005. I would like to update the value, the error, and record some information about the publication that it came from.
The program that uses this file is legacy code that is both crucial and inflexible: it needs the file to look like the above output when it's read in. I would really like for there to be a way to update the input file to include things like errors on the values and publication hyperlinks in comments. I would also like some kind of version control ability, to return the state of this large file today, or in 5 months after 20 more lines are updated with new values.
Any suggestions on how best to accomplish this? Should I store everything in some kind of database?
Databases are deeply tied to identity. If a database can't identify a row by the data that's in it, a database isn't going to help you.
If I were you, I'd start by storing the base file in a version control system, not a database. At 20 changes per 5 months, I'd probably make those changes manually and commit each batch of changes. (I don't know what might constitute a batch for you. Could be a single change every time.)
Since the format of the existing file is both crucial and brittle, I'm not sure whether modifying it is a good idea. I think I'd feel better about storing error ranges and publication hyperlinks in a separate file, and using a script to put the pieces together for applications that can use error ranges and hyperlinks.
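A minimal sketch of that script idea in Python (the file names and the two-field key are assumptions; the first five fields pass through untouched, so the legacy program still parses the merged output):

import csv

# Load the annotations, keyed by the first two fields of a data line.
annotations = {}
with open("annotations.tsv", newline="") as f:  # hypothetical annotations file
    for row in csv.reader(f, delimiter="\t"):
        annotations[(row[0], row[1])] = row[2:]  # e.g. error, reference URL

with open("base.dat") as src, open("merged.dat", "w") as out:
    for line in src:
        fields = line.split()
        if len(fields) < 5:
            out.write(line)  # pass blank/malformed lines through untouched
            continue
        extra = annotations.get((fields[0], fields[1]), [])
        # The first five fields are unchanged, so the legacy program still
        # reads the line; everything appended lands in the comment zone.
        out.write(" ".join(fields + extra) + "\n")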
A database sounds sensible, SQL Server Express is free and widely used.
You can read in the text file including all comments and output the edited data in the same format. You can use a number of front ends including Access, for rapid development, or something you create yourself in VB.Net, or even Excel, at a pinch.
You will need to consider the structure of the table(s) but it should not be too difficult, and you can get help here.
For updating the information in the file to introduce errors and links, you don't need any database; just open the file, iterate through the lines, and update each one.
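A minimal sketch in Python of what that update could look like (the file name and annotation values are made up; it rewrites the comment zone after the fifth field of the matching line):

def update_line(path, key, error, link):
    # Rewrite the file, replacing the comment zone of the line whose first
    # two fields match `key` with the new error value and publication link.
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        for line in lines:
            fields = line.split()
            if fields[:2] == list(key):
                line = " ".join(fields[:5] + [error, link]) + "\n"
            f.write(line)

update_line("atomic.dat", ("FeII", "1608.4511"), "+/-0.002", "http://example.org/M03")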
If you want to be able to restore a line's state, you definitely need some kind of database. You can create a database in SQL Server or Firebird, for example, and store in it a row for each line's historical state (with date of creation, of course); your file itself would be the repository for current values, and you would be able to restore the file for a given date with some simple fetching of the database information.
If you can't use a database like Firebird or SQL Server, you can store the historical data in a simple text file; it's up to you. Just remember that, as @CatCall commented, you will necessarily need a way to identify each line in order to create a relation between the line in the file and the historical data stored in your repository.

What FoxPro data tools can I use to find corrupted data?

I have some SQL Server DTS packages that import data from a FoxPro database. This was working fine until recently. Now the script that imports data from one of the FoxPro tables bombs out about 470,000 records into the import. I'm just pulling the data into a table with nullable varchar fields so I'm thinking it must be a weird/corrupt data problem.
What tools would you use to track down a problem like this?
FYI, this is the error I'm getting:
Data for source column 1 ('field1') is not available. Your provider may require that all Blob columns be rightmost in the source result set.
There should not be any blob columns in this table.
Thanks for the suggestions. I don't know if it is a corruption problem for sure. I just started downloading FoxPro from my MSDN Subscription, so I'll see if I can open the table. SSRS opens the table, it just chokes before running through all the records. I'm just trying to figure out which record it's having a problem with.
Cmrepair is an excellent freeware utility to repair corrupted .DBF files.
Have you tried writing a small program that just copies the existing data to a new table?
Also,
http://fox.wikis.com/wc.dll?Wiki~TableCorruptionRepairTools~VFP
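If you want to script that record-by-record hunt, here's a minimal sketch in Python using the dbfread package (the package choice and file name are assumptions; the first record that fails to decode is a good candidate for where DTS falls over):

from dbfread import DBF

# Iterate record by record and report the first one that fails to decode.
records = iter(DBF("mytable.dbf", ignore_missing_memofile=True))  # made-up name
count = 0
while True:
    try:
        next(records)
    except StopIteration:
        print("Scanned", count, "records without error")
        break
    except Exception as e:
        print("Record", count, "failed:", e)
        break
    count += 1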
My company uses FoxPro to store quite a bit of data... In my experience, data corruption is very obvious, with the table failing to open in the first place. Do you have a copy of FoxPro to open the table with?
At 470,000 records you might want to check to see if you're approaching the 2 gigabyte limit on FoxPro table size. As I understand it, the records can still be there, but become inaccessible after the 2 gig point.
@Lance:
if you have access to Visual FoxPro command line window, type:
SET TABLEVALIDATE 11
USE "YourTable" EXCLUSIVE && If the table is damaged VFP must display an error here
PACK && To reindex the table and deleted "marked" records
PACK MEMO && If you have memo fields
After doing that, the structure of the table should be valid. If you want to see fields with invalid data, you can try:
SELECT * FROM YourTable WHERE EMPTY(YourField) && All records with YourField empty
SELECT * FROM YourTable WHERE LEN(YourMemoField) > 200 && All records with a long memo field, there can be corrupted data
etc.
Use Repair Databases from my site (www.shershahsoft.com) for FREE (and it will always be FREE).
I have designed this program to repair damaged FoxPro/FoxBase/dBase files. The program is very quick. It will repair a 1 GB table in less than a minute.
You can assign files and folders to the program. As you start the program it will mark all the corrupted files, and by clicking the Repair or Check and Repair button, it will repair all the corrupted files. Moreover, it will create a folder "CorruptData" in the folders where the actual data exists, and will keep copies of the corrupt files there.
One thing to keep in mind: always run Windows ChkDsk on the drives where you store the files, because when records are being copied to a table and a power failure occurs, there can be lost clusters, which Windows converts to files during ChkDsk. After that, Repair Databases will do the job for you.
I have used many paid and free programs which repair tables, but all such programs leave extra records in the tables with ambiguous characters (and they are time consuming too). The programmer needs to find and delete such records manually. But Repair Databases actually recovers the original records; you need no further action. The only action you need is reindexing your files.
In the repair process sometimes a File Open dialog appears which asks you to locate the compact index file for a table with indexes. You may cancel the dialog at that point; the table will be repaired, however you will need to reindex the file later. (This dialog may appear several times depending upon the number of corrupted indexes.)
