I'm getting an odd result from an AppleScript script called from within FileMaker that I've not seen before, nor have I found a reference to it online, here or elsewhere.
The following script creates three new records and calls the FileMaker script "paste_into_container". The FM script "paste_into_container" is very simple: it pastes whatever is on the clipboard into a specific container field. When the script is called from within FileMaker, the "paste_into_container" subscript only pastes the contents of the clipboard into the last new record. The container field on the first two records is left blank.
It's almost as if the loop creates the new record, ignores the "paste_into_container" script, and then moves on to the next iteration of the loop.
The script works fine when called from Script Editor but fails when called from within FileMaker. The script will also work if I drop the repeat loop and create just one record.
I've tried increasing the delay at the end of each loop, but it makes no difference whether it is 1 or 5 seconds.
I'm sure it's something simple, but I'm not seeing it, and after two days it's time for help (of one kind or another).
Additional info:
Mac OS X 10.6.8
FM 11 running through FM 11 Server
Thanks in advance
Phil
tell application "FileMaker Pro"
    show every record in table "Image_Info" in database myDB
    repeat with i from 1 to 3
        go to layout 1 of database myDB
        set myNewRecord to create new record in database myDB
        go to last record
        do script "paste_into_container"
        delay 1
    end repeat
end tell
There's a very good discussion on another forum specifically about this problem. When FileMaker moved from FM 10 to FM 11, they changed how the program handles requests from embedded AppleScripts. Essentially, since FM 11, FileMaker treats AppleScript commands asynchronously rather than synchronously whenever the AppleScript calls an internal FileMaker script with the do script command. What this means is that FileMaker will allow the AppleScript to continue without waiting for feedback from FileMaker saying the internal FM script is complete. There is a workaround, but it is hardly elegant and does not seem universal. See link below.
As I understand it, this only affects embedded AppleScripts calling other local FileMaker scripts.
link
For the script below, written in a .sql file:
if not exists (select * from sys.tables where name = 'abc_form')
CREATE TABLE abc_forms (
    x BIGINT IDENTITY,
    y VARCHAR(60),
    PRIMARY KEY (x)
)
The script above has a bug in the table name.
For programming languages like Java or C, the compiler helps resolve most name-resolution issues.
For any SQL script, how should one approach unit testing it? Static analysis...
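For instance, a post-deployment check along the following lines (just an illustration) would catch the mistake, because the table the script actually creates is abc_forms, not abc_form:

-- Hypothetical smoke test, assuming SQL Server: fail loudly if the expected table is missing.
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'abc_form')
    RAISERROR('Expected table abc_form was not created by the deployment script', 16, 1);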
Fifteen years ago I did something like you're requesting, via a lot of scripting. But we had special formats for the statements.
We had three different kinds of files:
One SQL file to set up the latest version of the complete database schema
One file for all the changes to apply to older database schemas (custom format like version;SQL)
One file for SQL statements the code uses on the database (custom format like statementnumber;statement)
It was required that every statement was on one line so that it could be extracted with awk!
1) First I set up the latest version of the database by executing one statement after the other and logging the errors to a file.
2) Then I did the same for all the changes, to get a second schema.
3) I compared the two database schemas to find any differences (see the sketch after this list).
4) I filled the complete latest schema with some dummy test values for testing.
5) Last but not least, I executed every SQL statement against the latest schema with test data and logged every error again.
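For step 3, today I would compare the schemas with something like this (a sketch only, assuming SQL Server and that the two builds were loaded into databases named SchemaFromScratch and SchemaFromUpgrades; both names are placeholders):

-- Columns present after the full setup script but missing after the upgrade scripts.
-- Run it again with the two databases swapped to see differences in the other direction.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM SchemaFromScratch.INFORMATION_SCHEMA.COLUMNS
EXCEPT
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM SchemaFromUpgrades.INFORMATION_SCHEMA.COLUMNS;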
In the end, the whole thing ran every night, and there was no morning without new errors that one of the 20 developers had put into version control. But it saved us a lot of time during the next install at a new customer.
You could also generate the SQL scripts from your code.
Code first avoids these kinds of problems. Choosing between code first and database first usually depends on whether your main focus is on your data or on your application.
I am working on something which requires me to run an SQL query to read a text file from a path, but it has to display only some of the contents, based on my conditions/requirements. I have read about using ROWSET/BULK copy, but it copies the entire file, and I need only certain data from the file.
Ex:
Line 1 - Hello, Good Morning.
Line 2 - Have a great day ahead.
Line 3 - Phone Number : 1112223333 and so on.
So, if I read this file and give the condition as "1112223333", it should display only the lines containing "1112223333".
NOTE: It should display the entire line for the matched condition.
Is it possible to achieve this using an SQL query? If so, please help me with this.
Thanks in advance.
Unfortunately, what you're trying to do doesn't work with ROWSET. There is no way to apply a filter at read time; you're stuck with reading in the entire file. You can of course read into a temp table, then delete the rows you don't want. This will give you the desired end result, but you have to take the hit of reading in the entire file.
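Here's a rough sketch of that read-then-filter approach (assuming SQL Server, an illustrative file path, and that the lines contain no tab characters, since tab is BULK INSERT's default field terminator):

CREATE TABLE #FileLines (Line VARCHAR(MAX));

BULK INSERT #FileLines
FROM 'C:\Data\sample.txt'
WITH (ROWTERMINATOR = '\n');  -- one row per line of the file

-- Keep only the lines containing the search value.
SELECT Line
FROM #FileLines
WHERE Line LIKE '%1112223333%';

DROP TABLE #FileLines;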
You may be able to generate a script to filter the file server side and trigger it with xp_cmdshell, but you'd still need to take the performance hit somewhere. While this would put a lower load on the SQL Server, you'd just be pushing the processing elsewhere, and you'd still have to wait for the processing to happen before you could read the file. It may be worth doing if the file is on a separate server and network traffic is an issue. If the file is on the same server, unless SQL is completely bogged down, I can't see an advantage to this.
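If you do go that route, something along these lines could work (a sketch only; xp_cmdshell must be enabled on the server, and the path is illustrative):

CREATE TABLE #Matches (Line VARCHAR(MAX));

-- findstr prints only the lines of the file containing the search value.
INSERT INTO #Matches (Line)
EXEC master..xp_cmdshell 'findstr /C:"1112223333" C:\Data\sample.txt';

SELECT Line FROM #Matches WHERE Line IS NOT NULL;  -- xp_cmdshell emits a trailing NULL row

DROP TABLE #Matches;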
I have a Microsoft Access Database file.
I wanted to delete records older than 5 years in it.
I made a backup before starting to modify the file.
I was able to run a query and then run the command below, appending/updating it to the database file.
DELETE FROM Inspections Report WHERE Date <= #01/01/2013#
I used the example:
Delete by Date In Access
The records still seem to be in there.
My desired output:
An analogy to what I am trying to do would be the bottom-left corner of a Microsoft Word file, where you see "Page 1 of 10" when it should say "Page 1 of 5" after deleting pages.
DELETE Table1.*, Table1.VisitDate
FROM Table1
WHERE (((Table1.VisitDate)<=#1/1/2013#));
I suggest you make the query object and save it, so it appears in the Navigation Pane and can be tested manually. [In which case you use Query Design View and don't need the syntax above]
Then use the OpenQuery method to fire that query.
To run a SQL command from Access VBA, you need to preface it with DoCmd.RunSQL or CurrentDb.Execute, and then put your SQL code in quotes.
Also, the space is probably causing an issue - if the table you're deleting records from is called "Inspections Report", you'd enclose both words in square brackets to show it's a single entity.
Finally, "Date" is a reserved word in Access, and it doesn't like it when you use it as a field name, as it can cause problems when referencing that field later on. You might try something like "InspectionDate".
So your code would look like this:
DoCmd.RunSQL "DELETE FROM [Inspections Report] WHERE InspectionDate <=#1/01/2013#"
If you have a static date, you'd probably only need to complete this process once, which you could just do in the table by filtering: filter for dates before that date, use Ctrl+A to select all the records that match the criteria, and hit Delete. It will ask if you want to delete them, and you may see that the number of records it's trying to delete is only the number that satisfies your criteria.
Of course, if you're interested in never having records older than a certain number of years, you could go for something in the original coding like DateAdd("yyyy", -5, Date()) as the cutoff, and set it to execute every time you launch the database.
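For example, the saved query for that rolling window might look something like this (a sketch, assuming the field has been renamed to InspectionDate as suggested above):

DELETE FROM [Inspections Report]
WHERE [InspectionDate] <= DateAdd('yyyy', -5, Date());

If you run it from VBA rather than as a saved query, wrap it in DoCmd.RunSQL as shown earlier.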
I'm trying to cross-reference data written to a text file with an existing database, i.e. check if the data written to the text file already exists in the database.
I have already created the program that writes the user's login data (name and password) to a text file. I have started to write an algorithm to read data from the text file, but I am a bit stuck: I have the name stored in the first line of the text file and the password (string values only) stored in the next line.
I have no idea how you would check if this data already exists in the database. Would you need to extract the contents of the database first, or could you cross-reference it directly with the database? I have already created the database (UserData.accdb) but I have not yet linked it up to the form. This is what I have so far:
procedure TForm1.btnclickClick(Sender: TObject);
var
  tRegister : TextFile;
  Sline : String;
  Sname, SPword : String;
begin
  AssignFile(tRegister, 'register.txt');
  try
    Reset(tRegister);
  except
    ShowMessage('File Register.txt does not exist');
    Exit;
  end;
  while not EOF(tRegister) do
  begin
    ReadLn(tRegister, Sline);   // first line of each pair: the name
    Sname := Sline;
    ReadLn(tRegister, SPword);  // next line: the password
    // This is where I want to add code
  end;
  CloseFile(tRegister);
end;
end.
Please don't be too harsh, I am still new to Delphi :)
I understand from your question that you're currently stuck trying to check whether a particular record exists in your database. I'll answer that very briefly, because there are plenty of similar questions on this site that should help you flesh out the details.
However, the title of your question asks about "cross-referencing data written to a text file with an existing database". From the description it sounds as if you're trying to reconcile data from two sources and figure out what matches and what doesn't. I'll spend a bit more time answering this because I think there'll be more valuable information in it.
To check for data in a database, you need:
A connection component which you configure to point to your database.
A query component linked to the connection component.
The query text will use a SQL statement to select rows from a particular table in your database.
I suggest your query be parametrised to select specifically the row you're looking for (I'll explain why later.)
NOTE: You could use a table component instead of a query component and this will change how you check for existing rows. It has the advantage you won't need to write SQL code. But well written SQL will be more scalable.
The options above vary by what database and what components you're using. But as I said, there are many similar questions already. With a bit of research you should be able to figure it out.
If you get stuck, you can ask a more specific question with details about what you've tried and what's not working. (Remember this is not a free "do your work for you service", and you will get backlash if it looks like that's what you're expecting.)
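To make that concrete, the SQL text behind such a parametrised query might look something like this (a sketch only; the table and column names are assumptions, and the exact parameter syntax depends on the components you use):

SELECT Name
FROM Users
WHERE Name = :EntryName

The connection and query components then just need to be pointed at UserData.accdb and given this SQL, with the parameter filled in at run time.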
Reconciling data between text file and database:
There are a few different approaches. The one you have chosen is quite acceptable. Basically it boils down to:
1. for each Entry in TheFile
2. .. if the Entry exists in TheDatabase
3. .. .. do something with Entry
4. .. .. otherwise do something else with Entry
The above steps are easy to understand, so it's easy to be confident the algorithm is correct. It doesn't matter if there aren't one-liners in Delphi to implement those steps. As a programmer, you have the power to create any additional functions/procedures you need.
It is just important that the structure of the routine be kept simple.
Any of the above steps that cannot be trivially implemented, you then want to break down into smaller steps: 2.a, 2.b; 3.a, 3.b, 3.c; etc. (This is what is meant by top-down design.)
TIP: You want to convert all the different breakdowns into their own functions and procedures. This will make maintaining your program and reusing routines you've already written much easier.
I'm going to focus on breaking down step 2. How you do this can be quite important if your database and text files grow large. For example, you could implement it so that every time you call the function to check "if Entry exists", it looks at every single record in your database. This would be very bad, because if you have m entries in your file and n entries in your database, you would be doing m x n checks.
Remember I said I'd explain why I suggest a parametrised query?
Databases are designed and written to manage data. Storing and retrieving data is their primary function, so let the database do the work of finding out whether the entry you're looking for exists. If, for example, you wrote your query to fetch all entries into your Delphi app and searched there, you would:
Increase the memory requirements of your application.
But more importantly, without extra work, expose yourself to the m x n problem mentioned above.
With a parametrised query, each time EntryExists(...) is called you can change the parameter values and effectively ask the database to look for the record. The database does the work and gives you an answer. So you might, for example, write your function as follows:
function TForm1.EntryExists(const AName: string): Boolean;
begin
qryFindEntry.Close;
qryFindEntry.Parameters.ParamByName('EntryName').Value := AName;
qryFindEntry.Open;
Result := qryFindEntry.RecordCount > 0;
end;
TIP: It will be very important that you define an index on the appropriate columns in your database, otherwise every time you open the query, it will also search every record.
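For example, the index could be created with a statement along these lines (the names are placeholders for whatever your table and column are actually called):

CREATE INDEX idxUserName ON Users (UserName);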
NOTE: Another option that is very similar would be to write a stored procedure on your database, and use a stored procedure component to call the database.
Additional comments:
Your routine to process the file is hard-coded to use register.txt
This makes it not reusable in its current form. Rather, move the code into a separate method: procedure ProcessFile(AFileName: string);. Then, in your button click event handler, call: ProcessFile('register.txt');.
TIP: In fact it is usually a good idea to move the bulk your code out of event handlers into methods with appropriate parameters. Change your event handler to call these methods. Doing this will make your code easier to maintain, test and reuse.
Your exception handling is wrong
This is an extremely bad way to do exception handling.
First, you don't want to ever write unnecessary exception handling. It just bloats your code making it more difficult to read and maintain. When an exception is raised:
The program starts unwinding to the innermost finally/except block. (So an exception would already exit your routine, as your added code does.)
By default, an unhandled exception (meaning one you haven't swallowed somewhere) will be handled by the application exception handler. By default this will simply show an error dialog. (As you have added code to do.)
The only change your code makes is to show a different message from the one actually raised. The problem is that you've made an incorrect assumption: "file does not exist" is not the only possible reason Reset(tRegister); might raise an exception:
The file may exist, but be exclusively locked.
The file may exist, but you don't have permission to access it.
There may be a resource error meaning the file is there but can't be opened.
So the only thing all your exception handling code has done is introduce a bug, because it now has the ability to hide the real reason for the exception, which can make troubleshooting much more difficult.
If you want to provide more information about the exception, the following is a better approach:
try
  Reset(tRegister);
except
  on E: Exception do
  begin
    //Note that the message doesn't make any assumptions about the cause of the error.
    E.Message := 'Unable to open file "'+AFileName+'": ' + E.Message;
    //Reraise the same exception but with extra potentially useful information.
    raise;
  end;
end;
The second problem is that even though you told the user about the error, you've hidden this fact from the rest of the program. Let's suppose you've found more uses for your ProcessFile method. You now have a routine that:
Receives files via email messages.
Calls ProcessFile.
Then deletes the file and the email message.
If an exception is raised in ProcessFile and you swallow (handle) it, then the above routine would delete a file that was not processed. This would obviously be bad. If you hadn't swallowed the exception, the above routine would skip the delete step, because the program is looking for the next finally/except block. At least this way you still have a record of the file for troubleshooting and reprocessing once the problem is resolved.
The third problem is that your exception handler is making the assumption your routine will always have a user to interact with. This limits reusability because if you now call ProcessFile in a server-side application, a dialog will pop up with no one to close it.
Leaving unresolved exceptions to be handled by the application exception handler means that you only need to change the default application exception handler in the server application, and all exceptions can be logged to file - without popping up a dialog.
In SQL Server Management Studio (SSMS) running against SQL Server 2005, I have a solution which contains a number of views.
These views are not sorted alphabetically.
Can anyone provide either an explanation of why, or a solution to order them alphabetically?
I just came across this forum post. It doesn't get any simpler.
Just edit the ssmssqlproj file.
The file for my project (SQL Main) is located in "My Documents\SQL Server Management Studio\Projects\SQL Main\SQL Main\SQL Main.ssmssqlproj". It's just an XML file. Change the following line
<LogicalFolder Name="Queries" Type="0" Sorted="true">
to
<LogicalFolder Name="Queries" Type="0" Sorted="false">
It will revert back to true, so you need to repeat this if you make changes. There is probably a better way. :)
There is a tool that you can install to sort the contents of a SQL Server Solution project.
See the following reference.
http://web.archive.org/web/20121019155526/http://www.sqldbatips.com/showarticle.asp?ID=78
Please ensure you save your work, before attempting a sort.
When you add new items to the project, they are added to the end of the list. They are kept in the order in which they were added to the project, because this order is preserved in the corresponding *.ssmssqlproj file. To change this order, close the project/solution, then locate the *.ssmssqlproj file and edit it with Notepad or your favorite XML editor (always make a backup first!). Reorder the FileNode elements, along with their children, to change the order in which the items appear in Solution Explorer.
Here's the solution I took. Upvotes for the recommendation of the tool above; however, I don't have .NET installed here, so I had to go for a manual approach.
Finish working on any project or solution files you have checked out, and check those edits in.
Get the latest version of everything in the solution.
Check everything out.
Back up the whole folder structure.
Open the .ssmssqlproj file in Notepad.
Maximise Notepad full screen and turn word wrap off.
Edit the .ssmssqlproj file, reordering the XML nodes into the required order.
Save the .ssmssqlproj file.
Check everything back in.
That seems to have fixed my issue.
The stored procedure listed in the following post can do it as well:
http://beyondrelational.com/blogs/jacob/archive/2009/08/22/xquery-lab-48-sorting-query-files-in-sql-server-management-studio-ssms-solution-project.aspx
Please note that the sorting is case-sensitive.
So "B..sql" will come before "a...sql".
Remember to start all your scripts with the same casing (be it lower or upper).
Another, easier (I think) way to edit the ssmssqlproj file is to do so with MS XML Notepad 2007. With that I could drag the nodes around to order them. Make a copy first, of course. 2007 appears to be the latest version and is available at ...
http://www.microsoft.com/en-us/download/details.aspx?id=7973
Another way to handle this is to start with a new query from Management Studio (not by selecting 'New Query' in the solution).
Save the query file into the solution folder with a good name.
Then 'Add Existing Item' into the solution.
It is added with your chosen name, sorted correctly, instead of the file initially being created as 'SQLQuery1'.
Maybe not much better, but it's another option besides editing the project file and reopening it.
Building on the previous answer: you can add it as a new query in the solution, and when you are done, just remove it from the solution, then use Add Existing Item and select your new query. Doing it this way puts the new query file in the correct solution folder for you before you remove it and add it back in.
And yet another way... while one of these ways worked for me several times, sometimes it becomes stubborn. Find the location where the ordering is broken. Take a screenshot of the file names.
Select all the file names you need to remove before the glitch.
In other words, if it sorts a-f and then sorts b-z:
Remove all the a-f (don't delete them) that occur before the b-z, then save the project.
Now add them back in and save the project again. Presto.
So far this has worked very well for me and is fairly easy to do.