WinDev - Create a database management (back office) app

Our team has been tasked with a project using WinDev Mobile for the Android platform. The task is to create a table control on a window (just one window), and the table must be able to show records retrieved from a query or data file. We have not yet found a way to make the table control add columns and rows automatically according to the data file.
To elaborate on what I meant: my team and I are trying to create a database manager app in WinDev Mobile (for HFSQL). We are tasked with making two windows, one window to choose which data table (the files within the analysis that exist in the database), and another window used to manage (create, read, update, delete) the data within that table.
We have been able to create the first window, but the second window is what currently confuses us, because we need a window containing a table that expands or shrinks its number of columns based on the data table we load. For example, a "client data" table has 5 data headers (client_name, client_address, client_phoneNum, client_type, client_eMail), so the table should have 5 columns, while a "providers" table with 10 data headers should produce a table with 10 columns, much like how Excel behaves when you open different Excel files.
Right now we are able to create a flexible table: it displays the columns and rows from different data files. We use BuildBrowsingTable to build the columns and FileToMemoryTable to display the records, and the table control adjusts itself according to the data file we select. But this causes a problem when we try to manage the data in the table (add, modify, delete):
FOR i = 1 TO arrAdd.Count()
    q.arrAdd = edt
END
Note: q is the data file, arrAdd is a column of the table, and edt is the edit control's value.
Unfortunately, it does not allow us to use "." on this line:
q.arrAdd = edt // '.' operator not allowed on Unicode String Element
Any advice on how to fix this?

If anyone has the same problem: there is currently no easy solution. We have finished the project, and the only way to do this with WinDev 24 is to:
1) Create a table control (Display_Datafile) that contains the list of data file names in the analysis.
2) When a row is selected in Display_Datafile, display its data in another table control (Display_Record). To do this you have to write an IF condition that checks whether the string in the selected row matches a data file name; then you can display the records from the query/data file that you declare inside that IF branch. There is currently no way to loop here; you have to write an IF statement for every data file.
3) From there you can access the query/data file to display the record in an edit control.
This is how we do it right now; I hope it helps.

Related

How to copy a record from Excel and paste into SSMS v17 to update an existing record

Folks,
I am a novice, really a business user (not a techie), in SSMS v17. I used to update a Microsoft SQL Server database in Azure using an Excel add-in (Devart), but then Active Directory multi-factor authentication was introduced at the organization and I can no longer use Devart. I am forced to use SSMS v17.5 to make updates to the database through its grid.
I found a few old (circa 2008) solutions for creating new records using copy/paste from Excel. Similarly, is there a way to update an existing record in the database by copying a row from Excel and pasting into that grid? I have some 60- and 70-year-olds on the team and would really like a simple solution that caters to all ages. Right now we scroll right and left, furiously, inside the grid trying to ensure we are updating the correct row.
I sincerely appreciate your guidance in this regard.
One way you can do this, though I do not recommend it, is:
1) Right-click the table you want to edit and select "Edit Top 200 Rows".
2) Right-click the grid that pops up and go to Pane -> SQL.
3) Change the query to show the records you want to edit, with the columns in the order you want, and it should work.
The big problem here, and why I wouldn't suggest this, is that if anything is wrong you could really mess things up; it will also lock the rows you are looking at, which could affect other things. A better workaround would be to create an Excel function and use a template, essentially letting Excel write the dynamic SQL. For example: ="Update "&B1&" Set "&C1&" = '"&D1&"' where ID = "&A1. In this case B1 is the table to update, C1 the column to update, D1 the new value, and A1 the unique identifier. I do this often when someone sends me a list of 1000 items and I need to put them in a temp table or something.
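For illustration, with hypothetical values A1 = 42, B1 = Customers, C1 = Status, and D1 = Active, the formula would emit:
Update Customers Set Status = 'Active' where ID = 42
You can then fill the formula down the sheet, copy the generated column, and run the statements in SSMS.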

SSIS - how to use lookup to add extra columns within data flow

I have a csv file containing the following columns:
Email | Date | Location
I want to pretty much throw this straight into a database table. The snag is that the Location values in the file are strings, e.g. "Boston", while the table I want to insert into has an integer column, LocationId.
So, midway through my data flow, I need to do a database query to get the LocationId corresponding to the Location, e.g.:
SELECT Id as LocationId FROM Locations WHERE Name = { location string from current csv row }
and add this to my current column set as a new column, "LocationId".
I don't know how to do this. I tried a lookup, but this meant I had to put the lookup in a separate data flow, and the columns from my CSV file don't seem to be available.
I want to use caching, as the same Locations are repeated a lot and I don't want to run a select for every row when I don't need to.
In summary:
How can I, part-way through the data flow, add a Lookup transformation (from a different source, SQL) and merge its output with the CSV-derived columns?
Is lookup the wrong transformation to use?
The Lookup transformation will work for you: it supports caching, and you can pass all input columns through to the output and add columns from the query you use in the Lookup transformation.
You can also use a Merge Join here; in some cases it is the better solution, but it brings additional overhead because it requires sorting its inputs.
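As a minimal sketch, the reference query for the Lookup transformation could look like the following (table and column names assumed from the question); in full cache mode SSIS runs it once and keeps the result set in memory:
SELECT Id AS LocationId, Name
FROM dbo.Locations;
Match the CSV's Location column against Name on the Columns tab, and check LocationId to add it to the output columns.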
Also try this:
Right-click the Lookup transformation, open the Advanced Editor, and go to Input and Output Properties.
Here you can add a new column or change the data type of existing columns.
Open the Flat File connection manager and go to the Advanced tab.
Click "New" to add the new column and modify its properties.
Now go back to the Flat File Destination output, right-click > Mappings, and map the lookup column to the new one.

Load data from multiple sources into a destination

I have a desktop application through which data is entered; it is captured in an MS Access DB. The application is used by multiple users at different locations. The idea is to download the data entered for a particular day into an Excel sheet and load it into a centralized server, which is an MS SQL Server instance.
That is, data (in the form of Excel sheets) will come from multiple locations and be saved into a shared folder on the server, from which it needs to be loaded into SQL Server.
There is an ID column with IDENTITY in the SQL Server table, which is the primary key, and no other column in the table contains unique values. Although the data comes from multiple sources, we need to maintain a single auto-incrementing series (IDENTITY).
Suppose there are 2 sources:
Source1: Has 100 records entered for the day.
Source2: Has 200 records entered for the day.
When they are loaded into the destination (SQL Server), the table should have 300 records, with ID values from 1 to 300.
Also, for the next day, when data comes from the sources, the destination has to load it starting from ID 301.
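A minimal sketch of such a destination table (the table and data column names are assumptions); an IDENTITY column keeps its numbering across loads, so the next day's rows continue from 301 automatically:
CREATE TABLE dbo.CentralData (
    ID INT IDENTITY(1,1) PRIMARY KEY,   -- 1..300 on day one, 301... the next day
    EntryDate DATE,                     -- hypothetical data columns
    EntryValue NVARCHAR(100)
);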
The issue is that there may be requests to change data at a source that has already been loaded into the central server. How do we update that row in the central server, given that the ID value will not be the same in the source and the destination? As mentioned earlier, ID is the only unique column in the table.
Please suggest some ideas for doing this, or whether I have to take a different approach to accomplish this task.
Thanks in advance!
Krishna
Okay, first I would suggest .NET, doing it through a file stream reader and dumping the data into the disconnected layer of ADO.NET: a DataSet with multiple DataTables for the different sources. But you mentioned SSIS, so I will go that route.
Create an SSIS project in Business Intelligence Development Studio (BIDS).
If you know for a fact you are just importing a bunch of Excel files, create many Data Flow Tasks, or many source-to-destination flows in a single Data Flow Task; it's up to you.
a. Personally, I would create a table in the database for each location's Excel file and have the columns map one-to-one. I will explain why later.
b. In a Data Flow Task, select 'Excel Source' as the source. Set the file location via 'New connection' by double-clicking the Excel Source.
c. Choose an ADO.NET Destination and drag the blue line from the Excel Source to this endpoint.
d. Map your destination to the corresponding SQL table.
e. Repeat as needed for each Excel file.
Set up the SSIS package to run automatically from SQL Server through SQL Server Management Studio. Remember to connect to an Integration Services instance, not a database instance.
Okay, now you have a bunch of tables instead of one big one. I did that for a reason: these should be entry points only, and I would leave the logic for detecting duplicates and recording import time to another table.
I would set up another two tables, for the combined logic and for auditing later.
a. Create a table named 'Imports' or similar. Give it the same columns plus three more: an identity column as the first column, seeded at the default (1,1) and assigned the primary key, and then 'ExcelFileLocation' and 'DateImported'.
b. Create a second table named 'ImportDupes' or similar, and repeat the process above for the columns.
c. Create a unique constraint on the first table over the value or set of values that makes an import row unique. A sketch of both tables follows.
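As a minimal sketch (the datacol1/datacol2 data columns match the procedure below; the unique-constraint columns are an assumption):
CREATE TABLE Imports (
    ImportId INT IDENTITY(1,1) PRIMARY KEY,
    datacol1 NVARCHAR(100),
    datacol2 NVARCHAR(100),
    ExcelFileLocation NVARCHAR(260),
    DateImported DATETIME,
    CONSTRAINT UQ_Imports UNIQUE (datacol1, datacol2)   -- assumed uniqueness columns
);
CREATE TABLE ImportDupes (
    ImportDupeId INT IDENTITY(1,1) PRIMARY KEY,
    datacol1 NVARCHAR(100),
    datacol2 NVARCHAR(100),
    ExcelFileLocation NVARCHAR(260),
    DateImported DATETIME
);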
d. Write a procedure in SQL that inserts from the MANY tables matching the Excel files into the ONE 'Imports' table. For each insert, do something similar to:
BEGIN TRY
    INSERT INTO Imports (datacol1, datacol2, ExcelFileLocation, DateImported)
    SELECT datacol1, datacol2, 'location of file', GETDATE()
    FROM TableExcel1
END TRY
-- if the insert breaks the unique constraint, route the rows to the second table
BEGIN CATCH
    INSERT INTO ImportDupes (datacol1, datacol2, ExcelFileLocation, DateImported)
    SELECT datacol1, datacol2, 'location of file', GETDATE()
    FROM TableExcel1
END CATCH
-- repeat the above for EACH Excel table
-- then clean up each staging table for the next import cycle
TRUNCATE TABLE TableExcel1
e. Automate the procedure to run on a schedule.
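One possible way to schedule it, as a rough sketch using SQL Server Agent (the job, schedule, and procedure names are hypothetical):
USE msdb;
EXEC dbo.sp_add_job @job_name = N'DailyExcelImport';
EXEC dbo.sp_add_jobstep @job_name = N'DailyExcelImport',
    @step_name = N'Run import procedure',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_ImportExcelStaging;';   -- hypothetical procedure name
EXEC dbo.sp_add_jobschedule @job_name = N'DailyExcelImport',
    @name = N'Daily 6 AM',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 060000;
EXEC dbo.sp_add_jobserver @job_name = N'DailyExcelImport';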
You now have two tables: one for successful imports and one for duplicates.
The reason I did it this way is twofold:
You often need more detail than the data itself, such as when it came in, what source it came from, and whether it was a duplicate; and if you do this for millions of rows, it can still be indexed easily.
This model is easier to take apart and automate. It may be more work to set up, but if a piece breaks you can see where, and you can easily stop the import for one location by turning off the code in that section.

Generate multiple Visio files from a database

I have an Access database containing information about multiple clients.
I can put this data into one table (1 row = 1 client).
I want to create .vsd files from this table. I want to draw a template and associate shapes with fields.
I'm new to Visio. I have succeeded in mapping a table to a drawing, but I don't know how to make Visio generate one file per row. I'd appreciate any help.
I don't think it's possible natively, so I wrote a PowerShell script to do the job.
For each row, it opens and edits the template and saves it under a new name.

Gather inserted text values from multiple rows in a report

I'm currently struggling with Oracle Apex.
I'm trying to create an application that enables customers to place their orders. For this I created a report that lists the available products. The report also contains a column (the SQL for it is simply '0' as "Quantity") which displays a text box. In this text box the customer should be able to enter the required quantity.
I've created a screenshot to make it easier to follow:
After the customer has filled out the form, the "Place Order" button will purchase the requested items.
My question is: how is it possible to read out which text boxes the user filled in with a number, and which product each of them belongs to?
An easier solution would be to recreate the region but choose Form Region and then Tabular Form Region; the wizard will then help take care of the DML for you. But you need to use specific table columns for this to work.
To answer your question more directly: the input items defined in reports that are posted to the server can be accessed in PL/SQL as a set of "global arrays". These are defined as PL/SQL tables in the apex_application package, with the names g_f01 through g_f50.
To be sure which of these arrays holds the quantity text box, look at the HTML of the page for the name attribute of the input tag. If it is f01, then you can process the results by accessing each element of apex_application.g_f01.
To link the input to the table, you will need some sort of key. If you use the wizard to build a Tabular Form, all of this headache is taken care of for you.
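As a rough sketch, assuming f01 is the quantity box and f02 a hidden column holding the product ID (the order_items table and its columns are hypothetical), an on-submit PL/SQL process could loop over the arrays like this:
FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
    -- only rows where the customer actually entered a quantity
    IF apex_application.g_f01(i) IS NOT NULL AND apex_application.g_f01(i) <> '0' THEN
        INSERT INTO order_items (product_id, quantity)
        VALUES (TO_NUMBER(apex_application.g_f02(i)),
                TO_NUMBER(apex_application.g_f01(i)));
    END IF;
END LOOP;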
