Slow blockchain syncing on TRON Nile Testnet - cryptocurrency

Trying to sync the Nile chain.
Starting the node with this command:
java -jar /data/FullNode/FullNode.jar --witness -c /data/FullNode/nile_net_config.conf
using the config from <nileex.io>.
Syncing is very slow and often stops altogether; I get just about 100-1000 blocks per day.
In tron.log I see: P2P_DISCONNECT reason TOO_MANY_PEERS
I tried the /wallet/listnodes HTTP API command on the public Nile nodes and put all 68 returned IPs into the seed section of the config.
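For illustration, the seed entries go in the seed.node block of the java-tron config file; a minimal sketch with placeholder IPs (not the actual 68) looks like this:

seed.node = {
  ip.list = [
    "192.0.2.10:18888",
    "192.0.2.11:18888"
  ]
}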
When it reached about 1,600,000 blocks, syncing stopped again, with the same errors in the log.
Now I have downloaded and unpacked a backup output-directory and am trying to sync to the end of the chain, but I have the same problem.
Blocks come in very slowly; sometimes, maybe once an hour, I get 50-100 blocks.
What am I doing wrong?

I used an old version of FullNode.jar.
Now it's OK with version 4.4.1.

I used an old version of FullNode.jar. Now it's OK with version 4.4.1.
That's correct: after bumping the version, the error is gone. Looking at the java-tron repository, it seems there are several different conditions for throwing TOO_MANY_PEERS; one of them is below:
@Override
public void run() {
  List<PeerConnection> peerConnectionList = pool.getActivePeers();
  List<PeerConnection> willDisconnectPeerList = new ArrayList<>();
  for (PeerConnection peerConnection : peerConnectionList) {
    NodeStatistics nodeStatistics = peerConnection.getNodeStatistics();
    if (!nodeStatistics.nodeIsHaveDataTransfer()
        && System.currentTimeMillis() - peerConnection.getStartTime() >= CHECK_TIME
        && !peerConnection.isTrustPeer()
        && !nodeStatistics.isPredefined()) {
      // if there is no data transfer for xxx minutes, disconnect the peer,
      // excluding trust peers and active peers
      willDisconnectPeerList.add(peerConnection);
    }
    nodeStatistics.resetTcpFlow();
  }
  if (!willDisconnectPeerList.isEmpty() && peerConnectionList.size()
      > Args.getInstance().getNodeMaxActiveNodes() * maxConnectNumberFactor) {
    Collections.shuffle(willDisconnectPeerList);
    for (int i = 0; i < willDisconnectPeerList.size() * disconnectNumberFactor; i++) {
      logger.error("{} does not have data transfer, disconnect the peer",
          willDisconnectPeerList.get(i).getInetAddress());
      willDisconnectPeerList.get(i).disconnect(ReasonCode.TOO_MANY_PEERS);
    }
  }
}

Related

OptaPlanner: timetable resuming generates a wrong solution, even if the generation starts from where it stopped

I am using Java Spring Boot and OptaPlanner to generate a timetable with almost 20 constraints. At the initial generation, everything works fine: the score shown by the OptaPlanner logging messages matches the solution received. But when I want to resume the generation, the solution contains a lot of problems (the constraints are no longer respected), even though the generation starts from where it stopped and continues initializing or finding a best solution.
My project is divided into two microservices: one that communicates with the UI and keeps the database, and another that receives data from the first when a request for starting/resuming the generation is made and generates the schedule using OptaPlanner. I use the same request for starting and resuming the generation.
This is how my project works: the UI makes the requests for starting, resuming, and stopping the generation and for getting the timetable. These requests are handled by the first microservice, which uses WebClient to send new requests to the second microservice. There, the timetable is generated after asking for some data from the database.
Here is the method for starting/resuming the generation in the second microservice:
@PostMapping("startSolver")
public ResponseEntity<?> startSolver(@PathVariable String organizationId) {
    try {
        SolverConfig solverConfig = SolverConfig.createFromXmlResource("solver/timeTableSolverConfig.xml");
        SolverFactory<TimeTable> solverFactory = new DefaultSolverFactory<>(solverConfig);
        this.solverManager = SolverManager.create(solverFactory);
        this.solverManager.solveAndListen(TimeTableService.SINGLETON_TIME_TABLE_ID,
                id -> timeTableService.findById(id, UUID.fromString(organizationId)),
                timeTable -> timeTableService.updateModifiedLessons(timeTable, organizationId));
        return new ResponseEntity<>("Solving has successfully started", HttpStatus.OK);
    } catch (OptaPlannerException exception) {
        System.out.println("OptaPlanner exception - " + exception.getMessage());
        return utils.generateResponse(exception.getMessage(), HttpStatus.CONFLICT);
    }
}
-> The findById(...) method makes a request to the first microservice, expecting to receive all the data needed by the constraints for generation (lists of planning entities, planning variables, and all other useful data):
public TimeTable findById(Long id, UUID organizationId) {
    SolverDataDTO solverDataDTO = webClient.get()
            .uri("http://localhost:8080/smart-planner/org/{organizationId}/optaplanner-solver/getSolverData",
                    organizationId)
            .retrieve()
            .onStatus(HttpStatus::isError, error -> {
                LOGGER.error(extractExceptionMessage("findById.fetchFails", "findById()"));
                return Mono.error(new OptaPlannerException(
                        extractExceptionMessage("findById.fetchFails", "")));
            })
            .bodyToMono(SolverDataDTO.class)
            .block();
    TimeTable timeTable = new TimeTable();
    // ... populating all lists from TimeTable with the ones received in solverDataDTO ...
    return timeTable;
}
-> The updateModifiedLessons(...) method sends the first microservice the list of all generated planning entities with the corresponding planning variables assigned:
public void updateModifiedLessons(TimeTable timeTable, String organizationId) {
    List<ScheduleSlot> slots = new ArrayList<>(timeTable.getScheduleSlotList());
    List<SolverScheduleSlotDTO> solverScheduleSlotDTOs =
            scheduleSlotConverter.convertModelsToSolverDTOs(slots);
    String executionMessage = webClient.post()
            .uri("http://localhost:8080/smart-planner/org/{organizationId}/optaplanner-solver/saveTimeTable",
                    organizationId)
            .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
            .body(Mono.just(solverScheduleSlotDTOs), SolverScheduleSlotDTO.class)
            .retrieve()
            .onStatus(HttpStatus::isError, error -> {
                LOGGER.error(extractExceptionMessage("saveSlots.savingFails", "updateModifiedLessons()"));
                return Mono.error(new OptaPlannerException(
                        extractExceptionMessage("saveSlots.savingFails", "")));
            })
            .bodyToMono(String.class)
            .block();
}
I would probably start by making sure that the solution you save to the DB after the first run of startSolver() is the same (in terms of Java equality), including the assignments of planning variables to values, as the solution you retrieve via findById() at the beginning of the second run.
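One way to check that, as a minimal sketch (the helper and the getTimeslot()/getRoom() getters are hypothetical; substitute your real planning variables):

import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

// Hypothetical check: compare the assignments saved after the first run
// with the ones reloaded via findById() before the second run.
boolean sameAssignments(TimeTable saved, TimeTable reloaded) {
    Map<Long, ScheduleSlot> savedById = saved.getScheduleSlotList().stream()
            .collect(Collectors.toMap(ScheduleSlot::getId, slot -> slot));
    for (ScheduleSlot slot : reloaded.getScheduleSlotList()) {
        ScheduleSlot original = savedById.get(slot.getId());
        if (original == null
                || !Objects.equals(original.getTimeslot(), slot.getTimeslot())
                || !Objects.equals(original.getRoom(), slot.getRoom())) {
            return false; // an assignment was lost or changed in the DB round trip
        }
    }
    return true;
}

If this ever returns false, the DTO conversion or persistence step is dropping assignments, which would explain the broken resumed solution.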

Laravel queue stops randomly without exception

I have a Laravel queue set up with a database connection. Note this problem also occurs with Redis, but I am currently using the database connection for the failed_jobs table, to help me check any errors that occur during the queue process.
The problem I have is that the queue stops working after a few jobs without any message showing why. But when I restart the command (php artisan queue:work), it picks up the remaining jobs and continues (but stops again later).
The job is configured with these values:
public $tries = 1;
public $timeout = 10;
The job code is (not the original code):
public function handle()
{
    try {
        $file = /* function to create file */;
        $zip = new ZipArchive();
        $zip->open(/* zip_path */);
        $zip->addFile(/* file_path */, /* file_name */);
        $zip->close();
        @unlink(/* remove file */);
    } catch (\Exception $e) {
        Log::error($e);
    }
}
And the failed function is set up like this:
public function failed(\Exception $exception)
{
    Log::error($exception);
    $this->fail($exception);
    $this->delete();
}
But there is no failed_jobs row, and my log is empty.
Edit: I added simple info logs after every line of code, and every time I start the queue, it stops after the last line. So the code runs correctly; Laravel just doesn't start the next job after that.
So what you need to do here to solve the issue is the following:
go to bootstrap/cache/ and remove all .php files
go to the project root and run php artisan queue:restart
Now, after adding the snippet, we need to trigger the following commands respectively:
sudo supervisorctl reread (to check the file content and make sure that the snippet is correctly set)
sudo supervisorctl update (to release the config changes under Supervisor)
sudo supervisorctl restart all (to re-trigger the queues so that the newly created queue gets initialized and starts picking up messages)
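The snippet referred to above is the Supervisor program entry for the worker; it is not shown here, but a typical one looks something like this (paths, process count, and options are assumptions to adapt):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /path/to/your/project/artisan queue:work database --sleep=3 --tries=1 --timeout=10
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/path/to/your/project/storage/logs/worker.log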
Did you try queue:listen?
php artisan queue:listen
Also, I guess you need Supervisor to keep your worker alive.

Running failed test using RetryAnalyzer - not working as expected for test using data provider

I am using IRetryAnalyzer for re-running failed test cases and IAnnotationTransformer for setting the annotation at run time. For a @Test using a data provider it gives a strange result.
I have set the retry limit to 3, that is, a test should re-run 3 times. The issue is:
If the test fails for the first data set, it retries 3 times (as expected). Then for all remaining data sets the re-run count is 2; I am not sure whether that is 2 retries, or 1 run plus 1 retry.
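The retry analyzer itself is not shown in the question; for context, a typical IRetryAnalyzer implementation is a simple per-instance counter along these lines (class name and constant are illustrative):

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRY_COUNT = 3; // the retry limit from the question
    private int retryCount = 0;

    @Override
    public boolean retry(ITestResult result) {
        // Returning true tells TestNG to re-run the failed test.
        if (retryCount < MAX_RETRY_COUNT) {
            retryCount++;
            return true;
        }
        return false;
    }
}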
Here is the class implementing the data provider:
@Test(dataProvider = "data-source")
public void toolbarActionsOnShapes(String selectShape)
        throws InterruptedException {
    Assert.assertTrue(false);
}

@DataProvider(name = "data-source")
public Object[][] allShapes() {
    return new Object[][] { { "Rectangle" }, { "Circle" }, { "Triangle" } };
}
On running this, I get the following output:
https://drive.google.com/open?id=1FxercluPinPiOOUAZKe_dMa6NvVMCE0j
For every set of data, if the test fails, there should be 3 retries. A dummy project zip is attached for reference:
https://drive.google.com/open?id=1Mt7V2TO4TWRKU9dN4FIFzprkDingUKaE
Thanks!
This is due to a bug that exists in TestNG 7.0.0-beta1. Please see GITHUB-1946 for more details.
I went ahead and fixed this as part of my pull request PR-1948.
Please make use of TestNG 7.0.0-SNAPSHOT to get past this problem. The fix should be part of the upcoming TestNG 7.0.0-beta2 or the 7.0.0 final release; that part is not decided yet.

How to execute an SSIS package when a file arrives at a folder

The requirement is to execute an SSIS package when a file arrives at a folder; I do not want to start the package manually.
The file arrival timing is not known, and files can arrive multiple times. Whenever the files arrive, they have to be loaded into a table. A solution like the File Watcher Task, I think, still expects the package to be started.
The way I have done this in the past is with an infinite-loop package called from SQL Server Agent. For example:
This is my infinite loop package:
Set these variables:
IsFileExists - Boolean - 0
FolderLocation - String - C:\Where the file is to be put in\
For the For Loop container:
Set the IsFileExists variable as above.
Set up a C# Script Task with the ReadOnlyVariable User::FolderLocation and the following code:
public void Main()
{
    int fileCount = 0;
    string[] FilesToProcess;
    while (fileCount == 0)
    {
        try
        {
            System.Threading.Thread.Sleep(10000);
            FilesToProcess = System.IO.Directory.GetFiles(Dts.Variables["FolderLocation"].Value.ToString(), "*.txt");
            fileCount = FilesToProcess.Length;
            if (fileCount != 0)
            {
                for (int i = 0; i < fileCount; i++)
                {
                    try
                    {
                        // Try to open the file exclusively to make sure it is no longer being written to.
                        System.IO.FileStream fs = new System.IO.FileStream(FilesToProcess[i], System.IO.FileMode.Open);
                        fs.Close();
                    }
                    catch (System.IO.IOException)
                    {
                        // File is still locked; reset the count so the while loop goes around again.
                        fileCount = 0;
                        continue;
                    }
                }
            }
        }
        catch (Exception)
        {
            throw;
        }
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}
What this will do is essentially keep an eye on the folder location for a .txt file. If the file is not there, it will sleep for 10 seconds (you can increase this if you want); if the file does exist, the script will complete and the package will then execute the load package. However, it will continue to run, so the next time a file is dropped in, it will execute the load package again.
Make sure to run this forever-loop package as a SQL Server Agent job so it will run all the time; we have a similar package running and it has never caused any problems.
Also, make sure your input package moves/archives the file away from the drop folder location.
As others have already suggested, a WMI task or an infinite loop are two options to achieve this, but IMO SSIS is resource-intensive. If you let a package run constantly in the background, it can eat up a lot of memory and CPU and cause performance issues for other packages, depending on how many others you have running. So another option you may want to consider is to schedule an Agent job every 5 or 10 minutes and call your package from the job, configuring the package to continue only when a file is there and to quit otherwise.
You can create a Windows service that uses WMI to detect file arrival and launch packages. Details on how to do this are located here: http://msbimentalist.wordpress.com/2012/04/27/trigger-ssis-package-when-files-available-in-a-folder-part2/?relatedposts_exclude=330
What about the SSIS File Watcher Task?

Provide a database packaged with the .APK file or host it separately on a website?

Here is some background about my app:
I am developing an Android app that will display a random quote or verse to the user. For this I am using an SQLite database. The size of the DB would be approximately 5K to 10K records, possibly increasing to up to 1M in later versions as new quotes and verses are added. Thus the user would need to update the DB as and when newer versions of the app or DB are released.
After reading through some forums online, there seem to be two feasible ways I could provide the DB:
1. Bundle it along with the .APK file of the app, or
2. Upload it to my app's website, from where users will have to download it
I want to know which method would be better (if there is yet another approach other than these, please let me know).
After pondering this problem for some time, I have these thoughts regarding the above approaches:
Approach 1:
Users will obtain the DB along with the app and won't have to download it separately, so installation would be easier. But users will have to reinstall the app every time there is a new version of the DB. Also, if the DB is large, it will make the installer cumbersome.
Approach 2:
Users will have to download the full DB from the website (although I could provide a small sample version of the DB via Approach 1). But the installer will be simpler and smaller in size. Also, I would be able to provide future versions of the DB easily to those who might not want newer versions of the app.
Could you please tell me, from a technical and an administrative standpoint, which approach would be the better one and why?
If there is a third or fourth approach better than either of these, please let me know.
Thank you!
Andruid
I built a similar app for Android which gets periodic updates with data from a government agency. It's fairly easy to build an Android-compatible db off the device using Perl or similar and download it to the phone from a website, and this works rather well; plus, the user gets current data whenever they download the app. It's also supposed to be possible to put the data on the sdcard if you want to avoid using primary data storage space, which is a bigger concern for my app, which has a ~6MB database.
In order to make Android happy with the DB, I believe you have to do the following (I build my DB using Perl):
$st = $db->prepare( "CREATE TABLE \"android_metadata\" (\"locale\" TEXT DEFAULT 'en_US')");
$st->execute();
$st = $db->prepare( "INSERT INTO \"android_metadata\" VALUES ('en_US')");
$st->execute();
I have an update activity which checks whether updates are available and, if so, presents an "update now" screen. The download process looks like this and lives in a DatabaseHelper class.
public void downloadUpdate(final Handler handler, final UpdateActivity updateActivity) {
    URL url;
    try {
        close();
        File f = new File(getDatabasePath());
        if (f.exists()) {
            f.delete();
        }
        getReadableDatabase();
        close();
        url = new URL("http://yourserver.com/" + currentDbVersion + ".sqlite");
        URLConnection urlconn = url.openConnection();
        final int contentLength = urlconn.getContentLength();
        Log.i(TAG, String.format("Download size %d", contentLength));
        handler.post(new Runnable() {
            public void run() {
                updateActivity.setProgressMax(contentLength);
            }
        });
        InputStream is = urlconn.getInputStream();
        // Open the empty db as the output stream
        OutputStream os = new FileOutputStream(f);
        // transfer bytes from the input file to the output file
        byte[] buffer = new byte[1024 * 1000];
        int written = 0;
        int length = 0;
        while (written < contentLength) {
            length = is.read(buffer);
            if (length == -1) {
                break; // guard against the stream ending before contentLength is reached
            }
            os.write(buffer, 0, length);
            written += length;
            final int currentprogress = written;
            handler.post(new Runnable() {
                public void run() {
                    Log.i(TAG, String.format("progress %d", currentprogress));
                    updateActivity.setCurrentProgress(currentprogress);
                }
            });
        }
        // Close the streams
        os.flush();
        os.close();
        is.close();
        Log.i(TAG, "Download complete");
        openDatabase();
    } catch (Exception e) {
        Log.e(TAG, "bad things", e);
    }
    handler.post(new Runnable() {
        public void run() {
            updateActivity.refreshState(true);
        }
    });
}
Also note that I keep a version number in the filename of the db files, and a pointer to the current one in a text file on the server.
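As a rough sketch of that version lookup (the pointer URL and one-line file format are assumptions, mirroring the download URL above):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

// Hypothetical helper: read the current DB version from a one-line text file on the server,
// then use it to build the "<version>.sqlite" download URL shown above.
private String fetchCurrentDbVersion() throws IOException {
    URL url = new URL("http://yourserver.com/current_version.txt"); // assumed pointer file
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    try {
        return in.readLine().trim();
    } finally {
        in.close();
    }
}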
It sounds like your app and your db are tightly bound; that is, the db is useless without the app, and the app is useless without the db, so I'd say go ahead and put them both in the same .apk.
That being said, if you expect the db to change very slowly over time, but the app to change more quickly, and you don't want your users to have to download the db with each new app revision, then you might want to unbundle them. To make this work, you can do one of two things:
Install them as separate applications, but make sure they share the same user ID using the sharedUserId tag in the AndroidManifest.xml file.
Install them as separate applications, and create a ContentProvider for the database. This way other apps could make use of your database as well (if that would be useful).
If you are going to store the db on your website, then I would recommend that you just make RPC calls to your web server and get the data that way, so the device never has to deal with a local database. Using a cache manager to avoid repeated lookups will help as well, so pages will not have to look up data each time they reload. Also, if you need to update the data, you do not have to send out a new app every time. Using HttpClient is pretty straightforward; if you need any examples, please let me know.
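Since examples were offered, here is a minimal sketch of such an RPC call using the Apache HttpClient bundled with older Android versions (the endpoint URL and the helper itself are assumptions, not part of the original answer):

import java.io.IOException;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

// Hypothetical helper: fetch a random quote from the server instead of a local DB.
public String fetchRandomQuote() throws IOException {
    HttpClient client = new DefaultHttpClient();
    HttpGet request = new HttpGet("http://yourserver.com/quotes/random"); // assumed endpoint
    HttpResponse response = client.execute(request);
    // The response body format (plain text vs. JSON) depends on your server.
    return EntityUtils.toString(response.getEntity());
}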
