Unable to add items to Roblox Table - arrays

I am having difficulty troubleshooting some code.
I have a for loop and in it I clone a part (called EnemySiteHub).
I expect that I can store each cloned part to a table (called EnemySiteTable).
Unfortunately, even though the loop runs successfully and I actually see the cloned EnemySiteHubs during a run of the game, the table size remains 0.
Trying to access the table in code gives a nil error.
Code snip:
local ENEMYSITE_COUNT = 5
local EnemySiteTable = {} -- [[ Store the table of enemy site objects ]]

-- Loops until there are as many enemy site hubs as set in ENEMYSITE_COUNT
for i = 1, ENEMYSITE_COUNT do
    -- Makes a copy of EnemySiteHub
    local enemySite = ServerStorage.EnemySites.EnemySiteHub:Clone()
    enemySite.Parent = workspace.EnemySites
    EnemySiteTable[i] = enemySite
end
This line of code causes the error below.
local enemySiteTableSize = #enemySiteTable
18:12:37.984 - ServerScriptService.MoveEnemyToSite:15: attempt to get length of a nil value
Any help will be appreciated.

The # operator retrieves the length of arrays (tables with sequential numeric keys). For anything else you will have to use one of the table library functions or a for i, v in pairs(EnemySiteTable) loop.
Here's some more information: https://developer.roblox.com/en-us/articles/Table
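For example, a minimal sketch of counting entries that works regardless of key type (reusing EnemySiteTable from the question):
local entryCount = 0
for _, site in pairs(EnemySiteTable) do
    entryCount = entryCount + 1
end
print("EnemySiteTable holds " .. entryCount .. " entries")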

Thanks @pyknight202
The problem originated somewhere else in my code.
The EnemySiteTable is in a module script.
The code below is the correct code to give access to the EnemySiteTable:
--Have the table of enemies accessible
EnemySiteManager.EnemySiteTable = EnemySiteTable
I had an error (a typo) in that line of code.
That typo kept the module returning a nil table, which is why the table size stayed 0.
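For reference, a rough sketch of how the module script is laid out; only the two table lines above are verbatim from my code, the surrounding boilerplate is assumed:
-- ModuleScript: EnemySiteManager (sketch, assumed layout)
local EnemySiteManager = {}

local EnemySiteTable = {}
-- ... the cloning loop from the question fills EnemySiteTable here ...

-- Have the table of enemies accessible
EnemySiteManager.EnemySiteTable = EnemySiteTable

return EnemySiteManager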

Related

NOT IN within a cypher query

I'm attempting to find all values that match any item within a list of values in Cypher, similar to a SQL query with IN and NOT IN. I also want to find, in a separate query, all values that are not in the list. The idea is to assign a binary property to each node indicating whether the node's name is within the predefined list.
I've tried the following code blocks:
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
WHERE NOT temp2.Name IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp
This block returns nothing, but should return a rather large amount of data.
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
WHERE temp2.Name NOT IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp
This code block returns an error about the position of NOT. Does anyone know the correct syntax for this statement? I've looked around online and in the Neo4j documentation, but there are a lot of conflicting ideas across version changes. Thanks in advance!
Neo4j is case sensitive, so you need to check the data to ensure that EMAIL_DOMAIN.Name is all upper case. If Name is mixed case, you can convert it using toUpper(). If Name is all lower case, then you need to convert the values in your query instead.
MATCH (temp:APP) - [] -> (temp2:EMAIL_DOMAIN)
WHERE NOT toUpper(temp2.Name) IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM']
RETURN temp
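If the end goal is the binary flag described in the question, something along these lines should work; the property name is_common_domain is just an example, not from the question:
MATCH (temp:APP)-[]->(temp2:EMAIL_DOMAIN)
SET temp2.is_common_domain = (toUpper(temp2.Name) IN ['GMAIL.COM', 'YAHOO.COM', 'OUTLOOK.COM', 'ICLOUD.COM', 'LIVE.COM'])
RETURN temp2.Name, temp2.is_common_domain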

Filter Array For IDs Existing in Another Array with Ruby on Rails/Mongo

I need to compare the two arrays declared here and return the records that exist only in the filtered_apps array. I am using the contents of the previous_apps array to check whether an ID in a record also exists in filtered_apps. I will be outputting the results to a CSV and displaying records that exist in both arrays on the console.
My question is this: how do I get the records that only exist in filtered_apps? Easiest for me would be to put those unique records into a new array to work with for the CSV.
start_date = Date.parse("2022-02-05")
end_date = Date.parse("2022-05-17")
valid_year = start_date.year
dupe_apps = []
uniq_apps = []
# Finding applications that meet my criteria:
filtered_apps = FinancialAssistance::Application.where(
:is_requesting_info_in_mail => true,
:aasm_state => "determined",
:submitted_at => {
"$exists" => true,
"$gte" => start_date,
"$lte" => end_date })
# Finding applications that I want to compare against filtered_apps
previous_apps = FinancialAssistance::Application.where(
is_requesting_info_in_mail: true,
:submitted_at => {
"$exists" => true,
"$gte" => valid_year })
# I'm using this to pull the ID that I'm using for comparison just to make the comparison lighter by only storing the family_id
previous_apps.each do |y|
previous_apps_array << y.family_id
end
# This is where I'm doing my comparison and it is not working.
filtered_apps.each do |app|
if app.family_id.in?(previous_apps_array) == false
then #non_dupe_apps << app
else "No duplicate found for application #{app.hbx_id}"
end
end
end
So what am I doing wrong in the last code section?
Let's check your original method first (I fixed the indentation to make it clearer). There are quite a few issues with it:
filtered_apps.each do |app|
  if app.family_id.in?(previous_apps_array) == false
  # Where is "@non_dupe_apps" declared? It isn't anywhere in your example...
  # Also, "then" is not necessary unless you want a one-line if-statement
  then @non_dupe_apps << app
  # This doesn't do anything, it's just a string
  # You need to use "p" or "puts" to output something to the console
  # Note that the "else" is also only triggered when duplicates WERE found...
  else "No duplicate found for application #{app.hbx_id}"
  end # Extra "end" here, this will mess things up
end
end
Also, you haven't declared previous_apps_array anywhere in your example, you just start adding to it out of nowhere.
Getting the difference between 2 arrays is dead easy in Ruby: just use the - operator!
uniq_apps = filtered_apps - previous_apps
You can also do this with ActiveRecord results, since they are just arrays of ActiveRecord objects. However, this doesn't help if you specifically need to compare results using the family_id column.
TIP: Getting the values of only a specific column/columns from your database is probably best done with the pluck or select method if you don't need to store any other data about those objects. With pluck, you only get an array of values in the result, not the full objects. select works a bit differently and returns ActiveRecord objects, but filters out everything but the selected columns. select is usually better in nested queries, since it doesn't trigger a separate query when used as a part of another query, while pluck always triggers one.
# Querying straight from the database
# This is what I would recommend, but it doesn't print the values of duplicates
uniq_apps = filtered_apps.where.not(family_id: previous_apps.select(:family_id))
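To make the pluck/select difference concrete, a tiny illustration (using a hypothetical ActiveRecord User model, purely as an example, since the snippet above is written in ActiveRecord terms):
# pluck runs its own query and returns plain values
User.pluck(:id)   # => [1, 2, 3]
# select returns a relation of User objects with only id loaded, and composes into other queries
User.select(:id)  # => #<ActiveRecord::Relation [#<User id: 1>, ...]>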
I highly recommend getting really familiar with at least filter/select and map among the basic array methods. They make things like this way easier. The Ruby docs are a great place to learn about them and others. A very simple example of doing something similar to what you explained in your question with filter/select on two arrays would be something like this:
arr = [1, 2, 3]
full_arr = [1, 2, 3, 4, 5]

unique_numbers = full_arr.filter do |num|
  if arr.include?(num)
    puts "Duplicates were found for #{num}"
    false
  else
    true
  end
end

# Duplicates were found for 1
# Duplicates were found for 2
# Duplicates were found for 3
=> [4, 5]
NOTE: The OP is working with ruby 2.5.9, where filter is not yet available as an array method (it was introduced in 2.6.3). However, filter is just an alias for select, which can be found on earlier versions of Ruby, so they can be used interchangeably. Personally, I prefer using filter because, as seen above, select is already used in other methods, and filter is also the more common term in other programming languages I usually work with. Of course when both are available, it doesn't really matter which one you use, as long as you keep it consistent.
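For completeness, the same filtering with select, a minimal equivalent that reuses arr and full_arr from above (minus the logging):
unique_numbers = full_arr.select { |num| !arr.include?(num) }
# => [4, 5]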
EDIT: My last answer did, in fact, not work.
Here is the code all nice and working.
It turns out the issue was that, when comparing family_id across the set of records, I forgot that the looped record was itself part of the set, so it would match itself, too. I added a check that excludes the looped record's own ID and Bob's your uncle.
I added the pass and reject arrays so I could check my work instead of downloading a CSV every time. Leaving them in mostly because I'm scared to change anything else.
start_date = Date.parse(date_from)
end_date = Date.parse(date_to)
valid_year = start_date.year
date_range = (start_date)..(end_date)

comparison_apps = FinancialAssistance::Application.by_year(start_date.year).where(
  aasm_state: 'determined',
  is_requesting_voter_registration_application_in_mail: true)

apps = FinancialAssistance::Application.where(
  :is_requesting_voter_registration_application_in_mail => true,
  :submitted_at => date_range).uniq { |n| n.family_id }

@pass_array = []
@reject_array = []

apps.each do |app|
  family = app.family
  app_id = app.id
  previous_apps = comparison_apps.where(family_id: family.id, :id.ne => app.id)
  if previous_apps.count > 0
    @reject_array << app
    puts "\e[32mApplicant hbx id \e[31m#{app.primary_applicant.person_hbx_id}\e[32m in family ID \e[31m#{family.id}\e[32m has registered to vote in a previous application.\e[0m"
  else
    <csv fields here>
    csv << [csv fields here]
  end
end
Basically, I pulled the applications into the apps array, then filtered them by the family_id field on each record.
I had to do this because the issue at the bottom of everything was that there were records present in apps that were themselves duplicates, only submitted a few days apart. Since I went in on the assumption that the initial apps array would be all unique, I thought the duplicates that showed up were due to the rest of the code not filtering correctly.
I then loop through apps.each do looking for matches, and when a duplicate is found it is added to the previous_apps array inside the loop. Since that array resets on each pass, if it ever has more than 0 records in it, the app gets called out as having been submitted already. Otherwise, it goes to my CSV report.
Thanks for the help on this, it really got my brain thinking in another direction that I needed to. It also helped improve the code even though the issue was at the very beginning.

Power Query M loop table / lookup via a self-join

First of all, I'm new to Power Query, so I'm taking the first steps. But I need to try to deliver something at work so I can gain some breathing time to learn.
I have the following table (example):
Orig_Item   Alt_Item
5.7         5.10
79.19       79.60
79.60       79.86
10.10
And I need to create a column that will loop through the table and display the final Alt_Item. So the result would be the following:
Orig_Item   Alt_Item   Final_Item
5.7         5.10       5.10
79.19       79.60      79.86
79.60       79.86      79.86
10.10
Many thanks
Actually, this is far too complicated for a first Power Query experience.
If that's what you've got to do, then so be it, but you should be aware that you are starting with a quite difficult task.
Small detail: I would expect the last Final_Item to be 10.10. According to the example, the Final_Item will be null if Alt_Item is null. If that is not correct, well that would be a nice first step for you to adjust the code below accordingly.
You can create a new blank query, copy and paste this code in the Advanced Editor (replacing the default code) and adjust the Source to your table name.
let
    Source = Table.Buffer(Table1),
    AddedFinal_Item =
        Table.AddColumn(
            Source,
            "Final_Item",
            each if [Alt_Item] = null
                 then null
                 else List.Last(
                          List.Generate(
                              () => [Final_Item = [Alt_Item], Continue = true],
                              each [Continue],
                              each [Final_Item =
                                        Table.First(
                                            Table.SelectRows(
                                                Source,
                                                (x) => x[Orig_Item] = [Final_Item]),
                                            [Alt_Item = "not found"]
                                        )[Alt_Item],
                                    Continue = Final_Item <> "not found"],
                              each [Final_Item])))
in
    AddedFinal_Item
This code uses function List.Generate to perform the looping.
For performance reasons, the table should always be buffered in memory (Table.Buffer), before invoking List.Generate.
List.Generate is one of the most complex Power Query functions.
It requires 4 arguments, each of which is a function in itself.
In this case the first argument starts with () and the other 3 with each (it should be clear from the outline above: they are aligned).
Argument 1 defines the initial values: a record with fields Final_Item and Continue.
Argument 2 is the condition to continue: if an item is found.
Argument 3 is the actual transformation in each iteration: the Source table is searched (with Table.SelectRows) for an Orig_Item equal to Alt_Item. This is wrapped in Table.First, which returns the first record (if any is found) and accepts a default value if nothing is found, in this case a record with field Alt_Item and value "not found". From this result the value of record field [Alt_Item] is returned, which is either the value from the first record, or "not found" from the default value.
If the value is "not found", then Continue becomes false and the iterations will stop.
Argument 4 is the value that will be returned: Final_Item.
List.Generate returns a list of all values from each iteration. Only the last value is required, so List.Generate is wrapped in List.Last.
Final remark: actual looping is rarely required in Power Query and I think it should be avoided as much as possible. In this case, however, it is a feasible solution as you don't know in advance how many Alt_Items will be encountered.
An alternative for List.Generate is using a recursive function.
Also List.Accumulate is close to looping, but that has a fixed number of iterations.
This can be solved simply with a self-join; the open question is how many layers of indirection you'll be expected to support.
Assuming just one level of indirection, no duplicates on Orig_Item, the solution is:
let
    Source = #"Input Table",
    SelfJoin1 = Table.NestedJoin( Source, {"Alt_Item"}, Source, {"Orig_Item"}, "_tmp_" ),
    Expand1 = Table.ExpandTableColumn( SelfJoin1, "_tmp_", {"Alt_Item"}, {"_lkp_"} ),
    ChkJoin1 = Table.AddColumn( Expand1, "Final_Item", each (if [_lkp_] = null then [Alt_Item] else [_lkp_]), type number)
in
    ChkJoin1
This is doable with the regular UI, using Merge Queries, then Expand Column and adding a custom column.
If you want to support more than one level of indirection, turn it into a function to be called X times. For data-driven levels of indirection, you wrap the calls in a List.Generate that drops the intermediate tables in a structured column, though that's a much more advanced level of PQ.
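As a rough sketch of that function idea (the names below are mine and untested against your data; it assumes the Final_Item column produced by ChkJoin1 above and no duplicate Orig_Item values):
ResolveOnce = (t as table) as table =>
    let
        Joined   = Table.NestedJoin( t, {"Final_Item"}, t, {"Orig_Item"}, "_tmp_" ),
        Expanded = Table.ExpandTableColumn( Joined, "_tmp_", {"Alt_Item"}, {"_lkp_"} ),
        Updated  = Table.AddColumn( Expanded, "Next_Item", each if [_lkp_] = null then [Final_Item] else [_lkp_] ),
        Cleaned  = Table.RemoveColumns( Updated, {"Final_Item", "_lkp_"} ),
        Renamed  = Table.RenameColumns( Cleaned, {{"Next_Item", "Final_Item"}} )
    in
        Renamed
// e.g. ResolveOnce( ResolveOnce( ChkJoin1 ) ) resolves two further levels of indirection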

Renaming a Dataset using a list

I'm new at programming, and I've been banging my head trying to figure this out for work.
I am trying to pull roughly 300 MySQL tables into my MATLAB workspace.
I have attached the following code, which is designed to pull one table (I plan to loop this code through the 300 MySQL tables once it is working).
The code successfully imports a single table into the workspace as a new dataset.
My problem arises when I try to rename this new dataset with the name of the original MySQL table.
Please see the code below for the part where I screw up (%Assign data to output variable).
I have a list of all 300 table names, and I plan to store them in a list called 'name'... hence name(1)... is this the right approach?
For example, the original mysql table was called 'options_20020208'.
After I run the script, I need the new dataset that Matlab imports to be called 'options_20020208' as well.
Any ideas here?
%Define Query
name = 'options_20020208'
%Set preferences with setdbprefs.
setdbprefs('DataReturnFormat', 'dataset');
setdbprefs('NullNumberRead', 'NaN');
setdbprefs('NullStringRead', 'null');
%Make connection to database.
conn = database('', 'root', 'password', 'Vendor', 'MYSQL', 'Server', 'localhost', 'PortNumber', 3306);
%Read data from database.
curs = exec(conn, [['SELECT ',name,'.UnderlyingSymbol , ']...
, [name,'.UnderlyingPrice , ']...
, [name,'.Expiration , ']...
, ['FROM ','PriceMatrix.',name,' ']...
]);
curs = fetch(curs);
close(curs);
%Assign data to output variable
name(1) = curs.Data;
%Close database connection.
close(conn);
%Clear variables
clear curs conn
If you have defined a variable name, name(1) means "the first element of variable name" (in this case just "o"). Regardless of the dimensions of the variable it returns a single value (i.e. even if X is some 5-D monstrosity, X(50) returns only the value of the 50th element). name(1) = data means "set the first element of variable name to be equal to data" and will cause an error if data is not of the right size, and either an error or unexpected behaviour if it's not the right type.
For example, try this at the command line:
name = 'options_20020208';
name(1) = 1
Now, technically what you want can be done, although I don't recommend it. If you have all the names in some sort of 300 x (length) variable then over a loop of n = 1:300 it would be something like this (where name is your list of variable names):
eval([name(n,:) ' = curs.Data;'])
This will create 300 variables named 'options_20020208' or similar, each containing one set of curs.Data. However, there are better ways of storing data in your workspace that will make further operations on your data easier; for example, you could use structures:
myStruct(n).name = name(n,:);
myStruct(n).Data = curs.Data;
If you wanted to do some analysis and then save out all this data in some format, for example, it's going to be much easier to loop over the structure and set the filename to [myStruct(n).name,'.csv'] and the file contents to myStruct(n).AdjustedData, etc., than to deal with 300 named variables.
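As a rough sketch of that structure-based approach (variable names are illustrative and the query itself is elided, so this is not working code for your database):
% build the struct array inside the query loop
for n = 1:size(name, 1)
    tableName = strtrim(name(n, :));
    % ... run the query for tableName as in the original script ...
    myStruct(n).name = tableName;
    myStruct(n).Data = curs.Data;
end
% later, e.g. when exporting, derive one filename per stored table
for n = 1:numel(myStruct)
    outFile = [myStruct(n).name, '.csv'];
    % ... write myStruct(n).Data (or the analysed version) to outFile ...
end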

Arrays in Rails

After much googling and console testing I need some help with arrays in Rails. In a method I search the db for all rows matching a certain requirement and put them in a variable. Next I want to call each on that array and loop through it. My problem is that sometimes only one row is matched in the initial search and .each causes a NoMethodError.
I called .class in both situations: when there are multiple rows and when there is only one row. When there are multiple rows, the variable I dump them into is an Array. If there is only one row, it is an instance of the model class.
How can I have an each loop that won't break when there's only one instance of an object in my search? I could hack something together with lots of conditional code, but I feel like I'm not seeing something really simple here.
Thanks!
Requested Code Below
@user = User.new(params[:user])
if @user.save
  # scan the invites dbtable and if user email is present, add the new uid to the table
  @talentInvites = TalentInvitation.find_by_email(@user.email)
  unless @talentInvites.nil?
    @talentInvites.each do |tiv|
      tiv.update_attribute(:user_id, @user.id)
    end
  end
....more code...
Use find_all_by_email; it will always return an array, even an empty one.
@user = User.new(params[:user])
if @user.save
  # scan the invites dbtable and if user email is present, add the new uid to the table
  @talentInvites = TalentInvitation.find_all_by_email(@user.email)
  unless @talentInvites.empty?
    @talentInvites.each do |tiv|
      tiv.update_attribute(:user_id, @user.id)
    end
  end
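If you're on a newer Rails version where the find_all_by_* dynamic finders are gone, a where query gives the same always-enumerable behaviour, e.g.:
@talentInvites = TalentInvitation.where(email: @user.email)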
