Using VB6 to update info in a database - sql-server

I'm being taught VB6 by a co-worker who gives me assignments every week, and I think this time he's overestimated my skills. I'm supposed to find a line in a text file that contains brand IDs and their respective brand names. Once I find the line, I'm to split it into variables and use that info in a program that, via an SQL statement, finds the brand and replaces the "BrandName" in the item description with the "NewBrandName".
Here's what I'm working with:
Dim ff As Integer
ff = FreeFile
Open "\\tsclient\c\temp\BrandNames.txt" For Input As #ff
Do Until EOF(ff)
    Dim fileline As String, linefields() As String
    Line Input #ff, fileline
    linefields = Split(fileline, ",")
    brandID = linefields(0)
    BrandName = linefields(1)
    NewBrandName = linefields(2)
I want to use the following line in the text file, since it's the brand I'm working with:
BrandID =CHEFJ, BrandName=Chef Jay's NewBrandName=Chef Jays
That's what 'fileline' is; I just don't know how to select just that one line.
As for updating the info, here's what I've got:
Dim rs As ADODB.Recordset, newDesc1 As String
Set rs = hgSelect("select desc1 from thprod where brandID='CHEFJ'")
Do While Not rs.EOF
    If Left(rs!desc1, Len(BrandName)) = BrandName Then
        newDesc1 = NewBrandName & Mid(rs!desc1, Len(BrandName) + 1)
        hgExec "update thprod set desc1=" & adoquote(newDesc1) & _
               " where brandID='CHEFJ' and desc1=" & adoquote(rs!desc1)
    End If
    rs.MoveNext
Loop
How do I put this all together?

Just to give you some guidelines:
Firstly you need to read the Text file, which you are already doing.
Then, once you get the data, you need to spot the format and SPLIT the data to retrieve only the parts you need.
For example, if the data read from the text file gives you BrandID=CHEFJ, BrandName=Chef Jay's, NewBrandName=Chef Jays, you will see that the values are delimited by commas (,) and each property value is preceded by an equals sign.
Follow LINK for more info on how to split.
Once you've split the data, you can easily use it to proceed with your database update. To update your db, first you will need to create the connection, then build your update query using the data you've fetched from the text file.
Finally you need to execute your query using ADODB. This EXAMPLE can help.
Do not forget to dispose of the objects you used, including your connection.
Hope it helps.
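Putting those steps together, here is a rough sketch of the whole flow (not a tested, drop-in solution: the connection string is a placeholder, it assumes each line of BrandNames.txt really is comma-delimited name=value pairs as described above, and fileline comes from the reading loop in the question):
Dim cn As ADODB.Connection
Dim parts() As String
Dim brandID As String, BrandName As String, NewBrandName As String

' Split one line of the file and keep only the text after each "="
parts = Split(fileline, ",")
brandID = Trim$(Mid$(parts(0), InStr(parts(0), "=") + 1))
BrandName = Trim$(Mid$(parts(1), InStr(parts(1), "=") + 1))
NewBrandName = Trim$(Mid$(parts(2), InStr(parts(2), "=") + 1))

' Only act on the brand you care about; skip every other line
If brandID = "CHEFJ" Then
    Set cn = New ADODB.Connection
    cn.Open "your connection string here"
    ' Replace the BrandName prefix of desc1 with NewBrandName
    cn.Execute "update thprod set desc1 = '" & Replace(NewBrandName, "'", "''") & _
               "' + substring(desc1, " & (Len(BrandName) + 1) & ", len(desc1))" & _
               " where brandID = 'CHEFJ' and desc1 like '" & _
               Replace(BrandName, "'", "''") & "%'"
    cn.Close
End If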

Related

Excel VBA: Create Array from Filter Field Items?

A report I am creating in Excel involves several very similar pivot tables that need to be specifically filtered many times (e.g. a Year-to-Date table, a Quarter-to-Date table, etc., all needing to be filtered the exact same way before being exported, then filtered again, then exported, and so on).
So I looked into VBA as a way of accepting a few filter criteria, then filtering multiple tables that way, before looping.
However, I'm having a very tough time properly targeting PivotTables and specific fields. It appears an integrated Value field is targeted and filtered via code differently than, say, a "filter" field I have attached to the top of the PivotTables; those filter fields accept no "begins with", "contains", etc. strings. They are just checkboxes, and one or multiple can be selected.
So it's one thing to tell it via VBA to select one item, and another to have it select all but one item. The latter requires the code to target every single possible value except the one I want excluded.
My idea for this, then, is to create an array from every possible existing value in this filter field, then go through a loop where each value is added to my code as a value to check.
I have some code so far:
ActiveSheet.PivotTables("QTD_Pivot_By_Category").PivotFields( _
"[Range].[Address_1].[Address_1]").VisibleItemsList = Array( _
"[Range].[Address_1].&", "[Range].[Address_1].&[0]", "[Range].[Address_1].&[101]" _
, "[Range].[Address_1].&[INC]", "[Range].[Address_1].&[KRT]", _
"[Range].[Address_1].&[LTD]", "[Range].[Address_1].&[RPO]", _
"[Range].[Address_1].&[ INC]", "[Range].[Address_1].&[CORP]", _
"[Range].[Address_1].&[INC.]", "[Range].[Address_1].&[LTD.]", _
"[Range].[Address_1].&[LTEE]", "[Range].[Address_1].&[PAWS]", _
Now, if I just record this macro from actions in Excel, doing "Select All" and then de-selecting the one value I don't want, it errors. It's selecting ~300 values, and while the recorder is writing this code it hits the limit on "_" line continuations allowed in a single VBA statement.
If my field is called "Address_1" as above, part of "[Range]" (not sure where that's defined or why, but it works), can I get some help on the most efficient way to set .VisibleItemsList to all POSSIBLE items in the list from a dynamic array, rather than selecting them manually? The list will be different from day to day, so it can't just be a hardcoded flat list.
Ideally, also in a way that circumvents the maximum number of "_" line continuations allowed in a single line of VBA code.
If it's of any use for context, the filter field on my table is a checkbox drop-down. I want a snapshot of every value sitting in there put into an array and then iterated over, adding each one the way my example code does:
Edit:
Since that filter field's values are being pulled from a local datasource, I decided to just grab those and make an array that way! So I'm starting my code this way:
Dim OGDataRange As Range, OGDataLastRow As Long
Dim ValueArray As Variant
OGDataLastRow = Worksheets("DATA QTD").Range("U2").End(xlDown).Row
Set OGDataRange = Worksheets("DATA QTD").Range("U2:U" & OGDataLastRow)
ValueArray = OGDataRange.Value
"ValueArray" is now my array. So I need help one-by-one pulling the values of this array, and adding them to my VisibleItemList as seen above.
Thank you so much for any assistance.
This might help you
Private Sub this()
    Dim pt As PivotTable
    Dim pf As PivotField
    Dim pi As PivotItem
    Dim strPVField As String

    strPVField = "this"
    Set pt = ActiveSheet.PivotTables("PivotTable1")
    Set pf = pt.PivotFields(strPVField)

    Application.ScreenUpdating = False
    Application.DisplayAlerts = False
    On Error Resume Next ' the last visible item cannot be hidden; skip that error
    pf.AutoSort xlManual, pf.SourceName
    For Each pi In pf.PivotItems
        pi.Visible = False
    Next pi
    pf.AutoSort xlAscending, pf.SourceName
    Application.DisplayAlerts = True
    Application.ScreenUpdating = True
End Sub
borrowed from
Deselect all items in a pivot table using vba
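Building on the asker's edit, another option is to construct the VisibleItemsList array at run time from ValueArray, which also sidesteps the line-continuation limit entirely. A rough sketch, assuming the pivot name, field name, and the "[Range].[Address_1].&[...]" member-key format from the recorded macro hold for every value:
Dim pf As PivotField
Dim items() As Variant
Dim i As Long

Set pf = ActiveSheet.PivotTables("QTD_Pivot_By_Category") _
    .PivotFields("[Range].[Address_1].[Address_1]")

' ValueArray came from Range.Value, so it is a 2-D array (1 To n, 1 To 1)
ReDim items(1 To UBound(ValueArray, 1))
For i = 1 To UBound(ValueArray, 1)
    items(i) = "[Range].[Address_1].&[" & CStr(ValueArray(i, 1)) & "]"
Next i

' Assign the whole list in one call instead of one hardcoded Array(...) literal
pf.VisibleItemsList = items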

Load CSV to a database when columns can be added/removed by the vendor

I've got some SSIS packages that take CSV files from the vendor and put them into our local database. The problem I'm having is that sometimes the vendor adds or removes columns, and we don't have time to update our packages before the next run, which causes the SSIS packages to abend. I want to somehow prevent this from happening.
I've tried reading in the CSV files line by line, stripping out new columns, and then using an insert statement to put the altered line into the table, but that takes far longer than our current process (the CSV files can have thousands or hundreds of thousands of records).
I've started looking into using ADO connections, but my local machine has neither the ACE nor JET providers and I think the server the package gets deployed to also lacks those providers (and I doubt I can get them installed on the deployment server).
I'm at a loss as to how I can load the tables while ignoring newly added or removed columns (although if a CSV file lacks a column the table has, that's not a big deal) in a way that's fast and reliable. Any ideas?
I went with a different approach, which seems to be working (after I worked out some kinks). What I did was take the CSV file rows and put them into a temporary datatable. When that was done, I did a bulk copy from the datatable to my database. In order to deal with missing or new columns, I determined what columns were common to both the CSV and the table and only processed those common columns (new columns were noted in the log file so they can be added later). Here's my BulkCopy module:
Private Sub BulkCopy(csvFile As String)
    Dim i As Integer
    Dim rowCount As Int32 = 0
    Dim colCount As Int32 = 0
    Dim writeThis As ArrayList = New ArrayList
    tempTable = New DataTable()
    Try
        '1) Set up the columns in the temporary data table, using commonColumns
        For i = 0 To commonColumns.Count - 1
            tempTable.Columns.Add(New DataColumn(commonColumns(i).ToString))
            tempTable.Columns(i).DataType = GetDataType(commonColumns(i).ToString)
        Next
        '2) Start adding data from the csv file to the temporary data table
        While Not csvReader.EndOfData
            currentRow = csvReader.ReadFields() 'Read the next row of the csv file
            rowCount += 1
            writeThis.Clear()
            For index = 0 To UBound(currentRow)
                'Only keep values for columns common to both the CSV and the table
                If commonColumns.Contains(csvColumns(index)) Then
                    If currentRow(index).Length = 0 Then
                        writeThis.Add(DBNull.Value)
                    Else
                        writeThis.Add(currentRow(index))
                    End If
                End If
            Next
            Dim row As DataRow = tempTable.NewRow()
            row.ItemArray = writeThis.ToArray
            tempTable.Rows.Add(row)
        End While
        csvReader.Close()
        '3) Bulk copy the temporary data table to the database table.
        Using copy As New SqlBulkCopy(dbConnection)
            '3.1) Set up the column mappings
            For i = 0 To commonColumns.Count - 1
                copy.ColumnMappings.Add(commonColumns(i).ToString, commonColumns(i).ToString)
            Next
            '3.2) Set the destination table name
            copy.DestinationTableName = tableName
            '3.3) Copy the temporary data table to the database table
            copy.WriteToServer(tempTable)
        End Using
    Catch ex As Exception
        message = "*****ERROR*****" + vbNewLine
        message += "BulkCopy: Encountered an exception of type " + ex.GetType.ToString()
        message += ": " + ex.Message + vbNewLine + "***************" + vbNewLine
        LogThis(message)
    End Try
End Sub
There may be something more elegant out there, but this so far seems to work.
Look into Biml, which builds and executes your SSIS package dynamically based on the metadata at run time.
Based on this comment:
I've tried reading in the CSV files line by line, stripping out new
columns, and then using an insert statement to put the altered line
into the table, but that takes far longer than our current process
(the CSV files can have thousands or hundreds of thousands of
records).
And this:
I used a csvreader to read the file. The insert was via a sqlcommand
object.
It would appear at first glance that the bottleneck is not in the flat file source, but in the destination. An OLE DB Command executes row by row, one statement per input row. Changing this to an OLE DB Destination converts the process to a bulk insert operation. To test this out, just use the flat file source and connect it to a Derived Column transform; run that and check the speed. If it's faster, change to the OLE DB Destination and try again. It also helps to insert into a heap (no clustered or nonclustered indexes) and to use TABLOCK.
However, this does not solve your whole varied file problem. I don't know what the flat file source does if you are short a column or more from how you originally configured it at design time. It might fail, or it might import the rows in some jagged form where part of the next row is assigned to the last columns in the current row. That could be a big mess.
However, I do know what happens when a flat file source gets extra columns. I put in this Connect item for it, which was sadly rejected: https://connect.microsoft.com/SQLServer/feedback/details/963631/ssis-parses-flat-files-incorrectly-when-the-source-file-contains-unexpected-extra-columns
What happens is that the extra columns are concatenated into the last column. If you plan for it, you could make the last column large and then parse it in SQL from the staging table. You could also jam the whole row into SQL and parse each column from there. That's a bit clunky, though, because you'll have a lot of CHARINDEX() calls checking the positions of values all over the place.
An easier option might be to parse it in .NET in a script task, using some combination of Split() to get all the values; checking the count of values in the array tells you how many columns you have. This would also allow you to direct the rows to different outputs based on what you find.
And lastly, you could ask the vendor to commit to a format: either a fixed number of columns, or a format that handles variation, like XML.
I've got a C# solution (I haven't checked it, but I think it works) for a source script component.
It will read the header into an array using split.
Then, for each data row, it uses the same Split function and uses the header values to decide which output column each value belongs to.
You will need to put all the output columns into the script component's output area.
All columns that are not present will have a null value on exit.
public override void CreateNewOutputRows()
{
    using (System.IO.StreamReader sr = new System.IO.StreamReader(@"[filepath and name]"))
    {
        string fullText = sr.ReadToEnd();
        string[] rows = fullText.Split('\n');

        // Get header values from the first row
        string[] header = rows[0].TrimEnd('\r').Split(',');

        for (int i = 1; i < rows.Length; i++)
        {
            string line = rows[i].TrimEnd('\r');
            if (line.Length == 0) continue; // skip a trailing blank line

            string[] rowVals = line.Split(',');
            Output0Buffer.AddRow(); // one output row per data row

            // Only walk the columns that actually arrived in this row
            for (int j = 0; j < rowVals.Length && j < header.Length; j++)
            {
                // Deal with each known header name
                switch (header[j])
                {
                    case "Field 1 Name": // this is where you use known column names
                        Output0Buffer.FieldOneName = rowVals[j]; // Cast if not string
                        break;
                    case "Field 2 Name":
                        Output0Buffer.FieldTwoName = rowVals[j]; // Cast if not string
                        break;
                    // continue this pattern for all column names
                }
            }
        }
    }
}

Text File to Array in Access VBA

I am looking to load a .txt file into a VBA array (in Access VBA), manipulate it there, and then paste the manipulated array into an Access table. I will then loop this macro over roughly 500 .txt files and populate the database with years of data.
I have done it before using Excel in this way: populate the sheet from one .txt file, manipulate the data, load it into the database, clear the sheet, load the next .txt file, and loop until all files have been processed. However, this takes ages, and it became obvious that Excel is not really needed, since everything is stored in Access anyway.
Since then, I have tried to figure out how to do it in Access straight away, without using Excel, but I have been struggling. I found some nice code on this board here:
Load csv file into a VBA array rather than Excel Sheet
and tried to amend it so it would work for me. Especially The_Barman's answer at the bottom seems simple and very interesting.
I am new to arrays, VBA, SQL, basically everything, so maybe there is some big mistake I am making here that is easy to resolve. See my code below:
Sub LoadCSVtoArray()
    Dim strfile As String
    Dim strcon As String
    Dim cn As ADODB.Connection
    Dim rs As Recordset
    Dim rsARR() As Variant

    Set cn = New ADODB.Connection
    strcon = "Provider=Microsoft.JET.OLEDB.4.0;Data Source=" & "\\filename.txt" & _
             ";Extended Properties=""text;HDR=Yes;FMT=Delimited"";"
    cn.Open strcon

    strSQL = "SELECT * filename.txt;"
    Set rs = cn.Execute(strSQL)
    rsARR = WorksheetFunction.Transpose(rs.GetRows)
    rs.Close
    Set cn = Nothing

    [a1].Resize(UBound(rsARR), UBound(Application.Transpose(rsARR))) = rsARR
End Sub
I don't even know if the bottom part of the code works, because an error message pops up saying the file is not found. The interesting thing is, if I debug and copy the value of strcon into Windows' "Run" box, it opens the correct file. So I guess the path is correct? Can I even open a .txt file through an ADODB connection? Right now I am a bit confused about whether this is possible and whether it is the best solution.
Some more background regarding the text files I am trying to save into the array:
- They are output from another program, so it is always the same structure and very organized.
It comes in this format:

Column a    Column b    ...
data 1      data 1
data 2      data 2
...

and so on.
If possible, I would like to retain this structure, and even save it as a table with the first row as column headers.
The Data Source is the folder path containing the file, and the file itself acts as the table (SELECT * FROM ...). Replace "\\filename.txt" in strcon with the folder path.
http://www.connectionstrings.com/textfile/
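In other words, something like this (a minimal sketch; C:\MyFolder\ is a made-up example path, and the missing FROM in the question's query is corrected as well):
strfile = "filename.txt"   ' the file name acts as the table name
strcon = "Provider=Microsoft.JET.OLEDB.4.0;" & _
         "Data Source=C:\MyFolder\;" & _
         "Extended Properties=""text;HDR=Yes;FMT=Delimited"";"
cn.Open strcon
strSQL = "SELECT * FROM " & strfile & ";"
Set rs = cn.Execute(strSQL)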

Access query not using index

I have a table in Access with one field called HostName; it is a text field with a 100-character maximum. I use it to store DNS host names. The field is set up as the primary key. If I run the following query, it returns the expected results but takes about 8 seconds to complete on a table with 1 million records:
SELECT TOP 1 HostsRev.HostName
FROM HostsRev
WHERE (((HostsRev.HostName)>="test"))
ORDER BY HostsRev.HostName;
If I remove the "ORDER BY" part, it returns in less than 1 second, but doesn't always return what I would expect: not necessarily the first record that is >= "test".
I am doing the query via ADO from a C++ app, but I've tested in Access also, by creating a query, and get the same results.
What I need is to quickly find the first record, if any, that starts with a given string. I also tried using a LIKE query, but that had the same results. I need to do this because if I search on images.google.com, I need to know if the list contains google.com but not images.google.com (I actually store the host names in reverse string order to make this work correctly, and reverse the strings before doing the lookup).
The issue is that the TOP command on its own does not apply sorting to the data, so without the ORDER BY the rows can come back in a different order and thus give different results. You could try the following instead:
SELECT Min(HostName) FROM HostsRev WHERE HostName >= "test"
Not sure if this will give any better performance, though, but it's worth a go :)
I am not sure if you can do this from C++, not being a C++ programmer, but ADO supports an .Index property to let you set the index you wish to use, and a .Seek method to search on that index. Here is some code in VB for what it is worth.
Dim conn As ADODB.Connection
Dim rs As ADODB.Recordset

Set conn = New ADODB.Connection
conn.Open ConnectionString

Set rs = New ADODB.Recordset
rs.Index = "PrimaryKey"   ' name of the index to seek on; set before opening
' Seek requires a table opened directly (adCmdTableDirect) with a
' server-side cursor, and a provider that supports it (e.g. Jet)
rs.Open "mytable", conn, adOpenKeyset, adLockReadOnly, adCmdTableDirect
rs.Seek "test", adSeekAfterEQ
If rs.EOF Then
    ' record not found
End If

How to split the data in this file vb6

I have this file. It stores names, a project, the week that they are storing the data for, and the hours spent on the project. Here is an example:
"James","Project5","15/05/2010","3"
"Matt","Project1","01/05/2010","5"
"Ellie","Project5","24/04/2010","1"
"Ellie","Project2","10/05/2010","3"
"Matt","Project3","03/05/2010","4"
I need to print it on the form without quotes. It should only show each name once and then just display the projects under the name. I've looked this up and the Split function seems interesting.
Any help would be good.
Create a Dictionary object and then put everything you find for a given name into one dictionary entry.
Then, in a second iteration, print it all out.
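A rough sketch of that idea (untested; it assumes a reference to Microsoft Scripting Runtime for the Dictionary, and it borrows the Input # reading approach shown in the last answer below):
Dim dict As Scripting.Dictionary
Dim filenum As Integer
Dim name As String, project As String, dat As String, hours As String
Dim key As Variant

Set dict = New Scripting.Dictionary
filenum = FreeFile
Open "U:\test.txt" For Input As #filenum
Do While Not EOF(filenum)
    Input #filenum, name, project, dat, hours   ' Input # strips the quotes
    ' Accumulate one entry per name, appending each project line to it
    dict(name) = dict(name) & project & " " & dat & " " & hours & vbCrLf
Loop
Close #filenum

' Second pass: print each name once, then its projects
For Each key In dict.Keys
    Debug.Print key
    Debug.Print dict(key)
Next key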
Microsoft has a CSV ADO provider. I think it is installed along with the rest of ADO. This is exactly the format it was designed to read.
See http://www.vb-helper.com/howto_ado_load_csv.html for a VB sample.
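For illustration, a minimal sketch of that approach (untested; the folder C:\temp\ and file name hours.txt are made up, and the Jet text driver's default delimiter settings are assumed):
Dim cn As ADODB.Connection, rs As ADODB.Recordset

Set cn = New ADODB.Connection
' The Data Source is the folder; the file name is the table
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\temp\;" & _
        "Extended Properties=""text;HDR=No;FMT=Delimited"";"

Set rs = cn.Execute("SELECT * FROM hours.txt")
Do While Not rs.EOF
    Debug.Print rs(0), rs(1), rs(2), rs(3)   ' quotes already stripped by the driver
    rs.MoveNext
Loop
rs.Close
cn.Close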
Do I understand you correctly in that you want to keep track of the names entered and thus re-order the data that way? Why not just read the data into a list of some new type that holds the name, project, and other information, and then sort that before printing it?
While the Dictionary solution is simpler, this may be a better approach if you are OK with building a class and implementing IComparer, so that you could sort the list and get this done pretty easily.
You could read each line, strip out the quotes, split on the comma, then process the array of data you would be left with:
Dim filenum As Integer
Dim inputLine As String
Dim data() As String

filenum = FreeFile
Open "U:\test.txt" For Input As #filenum
Do While Not EOF(filenum)
    Line Input #filenum, inputLine
    inputLine = Replace(inputLine, Chr(34), vbNullString)
    data = Split(inputLine, ",")
    Debug.Print data(0), data(1), data(2), data(3)
Loop
Close #filenum
Or you could have the Input command strip the quotes, and read the data into variables:
Dim filenum As Integer
Dim name As String, project As String, dat As String, hours As String

filenum = FreeFile
Open "U:\test.txt" For Input As #filenum
Do While Not EOF(filenum)
    Input #filenum, name, project, dat, hours
    Debug.Print name, project, dat, hours
Loop
Close #filenum
