I want to iterate the same code over different sets of macro variables, as I would in SAS, and then append all of the populated tables together. Coming from a SAS background, I am quite confused about how to do this in a PySpark environment. Any help is much appreciated!
Example code is below:
STEP1: define macro variables
lastyear_st=201615
lastyear_end=201622
thisyear_st=201715
thisyear_end=201722
STEP2: loop the code through various macro variables
customer_spend = sqlContext.sql("""
    select a.customer_code,
           sum(case when a.week_id between %d and %d then a.spend else 0 end) as spend
    from tableA a
    group by a.customer_code
    """ % (lastyear_st, lastyear_end))
# ...and then the same query again with (thisyear_st, thisyear_end)
STEP3: append each of the datasets populated above to the base table
# macroVars holds your start and end values arranged as a list of lists,
# where each inner list contains a start and an end value.
import os
import tempfile

macroVars = [[201615, 201622], [201715, 201722]]

# build the output path once so every iteration appends to the same location
output_path = os.path.join(tempfile.mkdtemp(), 'data')

# loop through the list of lists
for start, end in macroVars:
    # prepare the query using the values of start and end
    query = """
        SELECT a.customer_code,
               SUM(CASE WHEN a.week_id BETWEEN {} AND {}
                        THEN a.spend ELSE 0 END) AS spend
        FROM tableA a
        GROUP BY a.customer_code
        """.format(start, end)
    # execute the query
    customer_spend = sqlContext.sql(query)
    # depending on your base table setup, use the appropriate write command, for example:
    customer_spend.write.mode('append').parquet(output_path)
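If you would rather build one combined DataFrame in memory instead of appending to a table on disk, here is a minimal sketch under the same assumptions (the sqlContext, macroVars, and query above); unionAll is the Spark 1.x DataFrame method, and in Spark 2.x+ it is called union:
from functools import reduce

results = []
for start, end in macroVars:
    query = """
        SELECT a.customer_code,
               SUM(CASE WHEN a.week_id BETWEEN {} AND {}
                        THEN a.spend ELSE 0 END) AS spend
        FROM tableA a
        GROUP BY a.customer_code
        """.format(start, end)
    results.append(sqlContext.sql(query))

# stack the per-period results into a single DataFrame; they all share the same schema
customer_spend_all = reduce(lambda df1, df2: df1.unionAll(df2), results)
From there you can write customer_spend_all once, or tag each iteration's rows with its period before the union if you need to tell them apart.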
I'm very new to Python and have been trying really hard these last few days to work out how to go through a df row by row and check each row that has a difference between columns dQ and dCQ. I just used != 0 since the difference could be positive or negative. If that is true, I would then like to check in another table whether certain criteria are met. I'm used to working in R, where I could store the df in a variable and refer to columns by name; I can't seem to find a way to do that in Python. I have posted all of the code I've been playing with. I know this is messy, but any help would be appreciated. Thank you!
I've tried installing different packages that wouldn't work, and I tried writing a for loop (I failed miserably). Maybe a function? I'm not sure where to even look. I've never learned Python; I'm really doing my best watching videos online and reading on here.
import pyodbc
import pymysql
import pandas as pd
import numpy as np

conn = pyodbc.connect("Driver={ODBC Driver 17 for SQL Server};"
                      "Server=***-***-***.****.***.com;"
                      "Database=****;"
                      "Trusted_Connection=no;"
                      "UID=***;"
                      "PWD=***")
# cur = conn.cursor()
# cur.execute("SELECT TOP 1000 tr.dQ, po.dCQ, tr.dQ - po.dCQ as diff "
#             "FROM [IP].[dbo].[vT] tr (nolock) "
#             "JOIN [IP].[dbo].[vP] po ON tr.vchAN = po.vchCustAN "
#             "WHERE tr.dQ != po.dCQ")
# query = cur.fetchall()

query = ("SELECT TOP 100 tr.dQ, po.dCQ /*, tr.dQ - po.dCQ as diff */ "
         "FROM [IP].[dbo].[vT] tr (nolock) "
         "INNER JOIN [IP].[dbo].[vP] po ON tr.vchAN = po.vchCustAN "
         "WHERE tr.dQ != po.dCQ")
df = pd.read_sql(query, conn)
#print(df[2,])
# note: pyodbc cursors do not accept a pymysql DictCursor class,
# so rows are accessed by position here
cursor = conn.cursor()
cursor.execute("SELECT TOP 100 tr.dQ, po.dCQ /*, tr.dQ - po.dCQ as diff */ "
               "FROM [IP].[dbo].[vT] tr (nolock) "
               "INNER JOIN [IP].[dbo].[vP] po ON tr.vchAN = po.vchCustAN "
               "WHERE tr.dQ != po.dCQ")
result_set = cursor.fetchall()
for row in result_set:
    print("%s, %s" % (row[0], row[1]))  # dQ, dCQ
# if df[3] != 0:
#     diff = df[1] - df[2]
#     print(diff)
# else:
#     exit
# cursor = conn.cursor()
# for row in cursor.fetchall():
#     print(row)
#
# for record in df:
#     if record[1] != record[2]:
#         print(record[3])
#     else:
#         record[3] = record[1]
#         print(record)
# df['diff'] = np.where(df['dQ'] != df["dCQ"])
I expect some sort of notification that there's a difference in row xx, and then it will check in table vP to verify that we received this data's details. I believe I can get to that point if I can get the first part working. Any help is appreciated. I'm sorry if this question is not clear; I will do my best to answer any questions someone may have. Thank you!
One solution could be to make a new column where you store the result of the difference between df[1] and df[2]. One note first: it might be more precise to either name your columns when you make the df and then reference them as df['name1'] and df['name2'], or to use df.iloc[:,1] and df.iloc[:,2]. Also note that column numbers start at zero, so these refer to the second and third columns in the df. The reason to use iloc and the colons is to state explicitly that you want all rows and column numbers 1 and 2; otherwise, with df[1] or df[2], if your df were transposed they might actually refer to what you think of as the index. Now, on to a solution.
You could try
df['diff']=df.iloc[:,1]-df.iloc[:,2]
df['diff_bool']=np.where(df['diff']==0,False, True)
or you could combine this into one method
df['diff_bool'] = np.where(df.iloc[:,1] - df.iloc[:,2] == 0, False, True)
This will create a column in your df that says if there is a difference between columns one and two. You don't actually need to loop through row by row because pandas functions work like matrix math, so df.iloc[:,1]-df.iloc[:,2] will apply the subtraction row by row automatically.
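If you also want the "difference in row xx" notification, here is a small follow-up sketch, assuming df keeps the column names dQ and dCQ from the query:
import numpy as np
import pandas as pd

# assumes df already holds the query result, with columns named dQ and dCQ
df['diff'] = df['dQ'] - df['dCQ']
df['diff_bool'] = np.where(df['diff'] == 0, False, True)

# report every row where the two quantities differ
for idx, row in df[df['diff_bool']].iterrows():
    print("difference in row %s: dQ=%s, dCQ=%s" % (idx, row['dQ'], row['dCQ']))
The rows flagged here are the ones you would then look up in the vP table for your second check.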
I have a database that I am reading with sqlite3 in Python 2.7, using the following command:
import glob
import os
import sqlite3

# Change to database directory
os.chdir(data)
# Find database file
cur_db = glob.glob('*.db')
# Connect to database
con = sqlite3.connect('database.db')
c = con.cursor()
# Query database
print(len(available_table))
for row in c.execute('SELECT * FROM col1'):
    print row
which gives me something like:
(1, u'2.3', u'brown', u'0', u'hairy', u'banana', u'2', u'monkey')
I would like to look at values in the column holding u'2.3' that are greater than 2. But this is a Unicode string rather than a number, which makes it difficult to compare with a number (e.g. 2).
Ideally, I would like something like:
# Connect to database
con = sqlite3.connect('database.db')
c = con.cursor()
# Query database
c.execute('SELECT * FROM critter WHERE weight > 2')
QUESTION: How can I add a conditional statement to extract only data rows where this element is greater than 2? I would like to leave the database unaltered.
Edit: oh you want a query to do that... You can do:
for row in c.execute('select * from critter where cast(weight as numeric) > 2'):
    # do something
Older answer:
If you want to do this on the Python side you can use a try-except construct like so:
for row in c.execute('SELECT * FROM col1'):
    try:
        # the index is the position of the element in the tuple
        val = float(row[1])
        if val > 2:
            print(val)
    except ValueError:
        pass
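If the cutoff needs to vary, sqlite3's parameter binding keeps this flexible; a minimal sketch, assuming the same con connection and critter table as above:
threshold = 2  # hypothetical cutoff; change as needed
c = con.cursor()
for row in c.execute('SELECT * FROM critter WHERE CAST(weight AS NUMERIC) > ?',
                     (threshold,)):
    print(row)
The database itself is left unaltered; the cast only happens while the query runs.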
I'm just learning SPSS and I want to do a simple subgroup analysis based on a variable "status" I created, which can take values from 0 to 8. I would like to print the outputs in one go.
This is the pseudocode for what I want to do:
for (i = 1; i <= 8; i++)
{
    filter by (status = i)
    display analysis
    remove filter
}
That way I can do it all in one go, and I can also easily extend the analysis code and run it for all 8 subgroups.
I don't know if it's relevant but here is the code I want to iterate over currently:
USE ALL.
COMPUTE filter_$=(Workforce EQ 1 AND SurveySample = 1 AND State = 1).
VARIABLE LABELS filter_$ 'Workforce EQ 1 (FILTER)'.
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMATS filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.

FREQUENCIES VARIABLES = Q86 Q33 Q34 Q88 FSEScore
  /BARCHART FREQ
  /ORDER=ANALYSIS.

CROSSTABS
  /TABLES=FSEScore BY Q86
  /FORMAT=AVALUE TABLES
  /CELLS=ROW
  /COUNT ROUND CELL.

FILTER OFF.
USE ALL.
Thanks guys.
The SPLIT FILE command may solve the problem: it causes your analysis reports to show results for each category of your split variable separately:
*run your transformations.
sort cases by status.
split file by status.
FREQUENCIES .....
CROSSTABS ....
split file off.
If this is not enough, you can use a macro to run through "status" categories:
first define the macro:
define MyMacro ()
!do !ST=1 !to 8
* filter commands using **status = !ST**
* transformations using **status = !ST**
FREQUENCIES .....
CROSSTABS ....
!doend
!enddefine.
now call your macro:
MyMacro .
This is probably a rather crude way of doing it; the suggestion above is probably more sensible.
You can initialise Python in SPSS. The following code works:
begin program.
import spss
for i in xrange(1, 9):   # status values 1 through 8
    spss.Submit("""
USE ALL.
COMPUTE filter_$=(Workforce EQ 1 AND SurveySample = 1 AND Status = %d).
VARIABLE LABELS filter_$ 'Workforce EQ 1 (FILTER)'.
VALUE LABELS filter_$ 0 'Not Selected' 1 'Selected'.
FORMATS filter_$ (f1.0).
FILTER BY filter_$.
EXECUTE.
* Analysis as required.
FREQUENCIES VARIABLES = Q86
  /BARCHART FREQ
  /ORDER=ANALYSIS.
""" % i)
end program.
Many thanks to eli-k. I probably should have just used SPLIT FILE.
I am a SAS novice. I am trying to convert character variables to numeric. The code below works for one variable, but I need to convert more than 50 variables, hopefully simultaneously. Would an array solve this problem? If so, how would I write the syntax?
DATA conversion_subset;
    SET have;
    new_var = input(oldvar,4.);
    drop oldvar;
    rename new_var=oldvar;
RUN;
@Reeza
DATA conversion_subset;
    SET have;
    Array old_var(*) $ a_20040102--a_20040303 a_302000--a_302202;
    * The first list contains 8 variables, the second list contains 7 variables;
    Array new_var(15) var1-var15;
    Do i=1 to dim(old_var);
        new_var(i) = input(old_var(i),4.);
    End;
    *drop a_20040102--a_20040303 a_302000--a_302202;
    *rename var1-var15 = a_20040102--a_20040303 a_302000--a_302202;
RUN;
NOTE: Invalid argument to function INPUT at line 64 column 19
(new_var(i) = input(old_var(i),4.)
@Reeza
I am still stuck on this array. Your help would be greatly appreciated. My code:
DATA conversion_subset (DROP= _20040101 _20040201 _20040301);
    SET replace_nulls;
    Array _char(*) $ _200100--_601600;
    Array _num(*) var1-var90;
    Do i=1 to dim(_char);
        _num(i) = input(_char(i),4.);
    End;
RUN;
I am receiving the following error: ERROR: Array subscript out of range at line 64 column 6. Line 64 contains the input statement.
Yes, an array solves this issue. You will want a simple way to list the variables, so look into SAS variable lists as well. For example, if you're converting all character variables between first and last, you could list them as first_var-character-last_var.
The rename/drop are illustrated in other questions across SO.
DATA conversion_subset;
    SET have;
    Array old_var(50) $ first-character-last;
    Array new_var(50) var1-var50;
    Do i=1 to 50;
        new_var(i) = input(old_var(i),4.);
    End;
RUN;
As @Parfait suggests, it would be best to fix this when you first read the data in, rather than after it is already in a SAS data set. However, if you're given the data set and have to convert it, that's what you have to do. You can add a WHERE clause to the PROC SQL to exclude variables that should not be converted. If you do so, they won't be in the final data set unless you add them to the CREATE TABLE's SELECT clause.
PROC CONTENTS DATA=have OUT=havelist NOPRINT ;
RUN ; %* get variable names ;
PROC SQL ;
SELECT 'INPUT(' || name || ',4.) AS ' || name
INTO :convert SEPARATED BY ','
FROM havelist
; %* create the select statement ;
CREATE TABLE conversion_subset AS
SELECT &convert
FROM have
;
QUIT ;
If excluding variables is an issue and/or you want to use a DATA step, then use the PROC CONTENTS above and follow with:
PROC SQL ;
SELECT COMPRESS(name || '_n=INPUT(' || name || ',4.)'),
COMPRESS(name || '_n=' || name),
COMPRESS(name)
INTO :convertlst SEPARATED BY ';',
:renamelst SEPARATED BY ' ',
:droplst SEPARATED BY ' '
FROM havelist
;
QUIT ;
DATA conversion_subset (RENAME=(&renamelst)) ;
SET have ;
&convertlst ;
DROP &droplst ;
RUN ;
Again, add a where clause to exclude variables that should not be converted. This will automatically preserve any variables that you exclude from conversion with a WHERE in the PROC SQL SELECT.
If you have too many variables, or their names are very long, or adding _n to the end causes a name collision, things can go badly (too much data for a macro variable, an illegal field name, or one field overwriting another, respectively).
My source data contains 200,000+ observations; one of the many variables in the data set is "county." My goal is to write a macro that takes this one data set as input and splits it into 58 different temporary data sets, one for each of the California counties.
The first question is whether it is possible to specify the 58 counties on the DATA statement using something like a global reference array defined beforehand.
The second question is, assuming the output data sets have been properly specified on the DATA statement, whether it is possible to use a DO loop to choose the right data set to write to.
I can get the comparison to work properly, but I cannot seem to use an array reference to specify an output data set. This is most likely because I need more experience with the macro environment!
Please see below for the simplistic skeleton framework I have written so far. The c_long array contains the names of each of the counties, and the c_short array contains a three-letter abbreviation for each. Thanks in advance!
data splitraw;
    length county_name $15;
    infile "&path/random.csv" dsd firstobs=2;
    input county_name $ number;
run;

%macro _58countysplit(dxtosplit,countycol);
    data <need to specify 58 data sets here named something like &dxtosplit_ALA, &dxtosplit_ALP, etc.>;
        set &dxtosplit;
        do i=1 to 58;
            if c_long{i}=&countycol then output &dxtosplit._&c_short{i};
        end;
    run;
%mend _58countysplit;

%_58countysplit(splitraw,county_name);
The code you provided would need to run through the large dataset 58 times, each time writing a small one. I have done it a bit differently.
First I create a sample dataset with a variable "county" that will contain ten different values:
data large;
    attrib county length=$12;
    do i=1 to 10000;
        county=put(mod(i,10)+1,ROMAN.);
        output;
    end;
run;
Next, I find all the unique values and construct the names of all the different tables I would like to create:
proc sql noprint;
select distinct compbl("large_"!!county) into :counties separated by " "
from large;
quit;
Now I have a macro variable "counties" that contains all the different datasets I want to create.
Here I am writing the IF statements to a file:
filename x temp;
data _null_;
    attrib county length=$12 ds length=$18;
    file x;
    i=1;
    do while(scan("&counties",i," ") ne "");
        ds=scan("&counties",i," ");
        county=scan(ds,-1,"_");
        put "if county=""" county +(-1) """ then output " ds ";";
        i+1;
    end;
run;
Now I have what I need to create the small datasets:
data &counties;
set large;
%inc x;
run;
I agree with user667489: there is almost always a better way than splitting one large data set into many small data sets. However, if you want to proceed along these lines, there is a table in sashelp called vcolumn which lists all your libraries, their tables, and each column (in each table), which should help you. Also, if you want
if c_long{i}=&countycol then output &dxtosplit._&c_short{i};
to resolve you might mean:
if c_long{i}=&countycol then output &&dxtosplit._&c_short{i};
It's likely, depending upon what you're actually trying to do, that BY processing is all you need. Nevertheless, here is a simple solution:
%macro split_by(data=, splitvar=);
    %local dslist iflist;
    proc sql noprint;
        select distinct cats("&splitvar._", &splitvar)
            into :dslist separated by ' '
            from &data;
        select distinct
            catt("if &splitvar='", &splitvar, "' then output &splitvar._", &splitvar, ";", '0A'x)
            into :iflist separated by "else "
            from &data;
    quit;
    data &dslist;
        set &data;
        &iflist
    run;
%mend split_by;
Here is some test data to illustrate:
options mprint;
data test;
    length county $1 val $2;
    infile datalines;
    input county val;
    datalines;
A 2
B 3
A 5
C 8
C 9
D 10
run;
%split_by(data=test, splitvar=county)
And you can view the log to see how the macro generates the DATA step you want:
MPRINT(SPLIT_BY): proc sql noprint;
MPRINT(SPLIT_BY): select distinct cats("county_", county) into :dslist separated by ' ' from test;
MPRINT(SPLIT_BY): select distinct catt("if county='", county, "' then output county_", county, ";", '0A'x) into :iflist separated
by "else " from test;
MPRINT(SPLIT_BY): quit;
NOTE: PROCEDURE SQL used (Total process time):
real time 0.01 seconds
cpu time 0.01 seconds
MPRINT(SPLIT_BY): data county_A county_B county_C county_D;
MPRINT(SPLIT_BY): set test;
MPRINT(SPLIT_BY): if county='A' then output county_A;
MPRINT(SPLIT_BY): else if county='B' then output county_B;
MPRINT(SPLIT_BY): else if county='C' then output county_C;
MPRINT(SPLIT_BY): else if county='D' then output county_D;
MPRINT(SPLIT_BY): run;
NOTE: There were 6 observations read from the data set WORK.TEST.
NOTE: The data set WORK.COUNTY_A has 2 observations and 2 variables.
NOTE: The data set WORK.COUNTY_B has 1 observations and 2 variables.
NOTE: The data set WORK.COUNTY_C has 2 observations and 2 variables.
NOTE: The data set WORK.COUNTY_D has 1 observations and 2 variables.
NOTE: DATA statement used (Total process time):
real time 0.03 seconds
cpu time 0.05 seconds