ABP.IO CLI: Trying to create an application from a custom template using the templates\app folder from the abp repo

I cloned https://github.com/abpframework/abp to D:\abp
I am following documentation: https://docs.abp.io/en/abp/latest/CLI-New-Command-Samples
I want to create MVC UI, Entity Framework Core, no mobile app, using the template in D:\abp\templates\app directory.
So I want to use:
- a custom template
- local ABP framework references
I wanted to make a simple change inside this page:
D:\abp\templates\app\aspnet-core\src\MyCompanyName.MyProjectName.Web\Pages\Index.cshtml
from:
<h1>Welcome to the Application</h1>
to:
<h1>Welcome to the MY NEW APPLICATION</h1>
I issued this command:
D:\TestApps>abp new Acme.BookStore -t app -csf true -u mvc -d ef --tiered --separate-identity-server --no-random-port --connection-string "Server=localhost;Database=Test;Trusted_Connection=True" --template-source D:\abp\templates\app --local-framework-ref --abp-path D:\abp
It seems that the ABP CLI requires a specific versioned .zip file to be here: D:\abp\templates\app\, and these folders need to be zipped into app-4.4.3.zip:
-angular
-aspnet-core
-react-native
Is this intentional? Should the documentation mention that we need to zip the folders for a particular type of starter template?
Here is the error:
[05:56:38 INF] ABP CLI (https://abp.io)
[05:56:39 INF] Version 4.4.3 (Stable)
[05:56:40 INF] Creating your project...
[05:56:40 INF] Project name: Acme.BookStore
[05:56:40 INF] Template: app
[05:56:40 INF] Tiered: yes
[05:56:40 INF] Database provider: EntityFrameworkCore
[05:56:40 INF] Connection string: Server=localhost;Database=TestRenaming;Trusted_Connection=True
[05:56:40 INF] UI Framework: Mvc
[05:56:40 INF] GitHub Abp Local Repository Path: D:\abp
[05:56:40 INF] Template Source: D:\abp\templates\app
[05:56:40 INF] Output folder: D:\TestApps\Acme.BookStore
[05:56:41 INF] Using local template: app, version: 4.4.3
[05:56:41 ERR] Could not find file 'D:\abp\templates\app\app-4.4.3.zip'.
System.IO.FileNotFoundException: Could not find file 'D:\abp\templates\app\app-4.4.3.zip'.

It seems it is by design that the ABP CLI expects zipped templates with versioning. This should be mentioned in the documentation.

Related

Azure DevOps React Deployment: Unable to run the script on Kudu Service. Error: Error: Unable to fetch script status due to timeout

I'm trying to deploy a CRA + Craco React application via Azure DevOps. This is my YML file:
# Node.js React Web App to Linux on Azure
# Build a Node.js React app and deploy it to Azure as a Linux web app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '{REDACTED FOR SO}'
  # Web app name
  webAppName: 'frontend'
  # Environment name
  environmentName: 'public'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: 'Azure App Service Deploy: '
            inputs:
              ConnectionType: 'AzureRM'
              azureSubscription: 'My Subscription'
              appType: 'webAppLinux'
              WebAppName: 'frontend'
              packageForLinux: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              RuntimeStack: 'NODE|10.10'
              StartupCommand: 'npm run start'
              ScriptType: 'Inline Script'
              InlineScript: |
                npm install
                npm run build --if-present
The build task succeeds. However, the deployment fails after running for ~20 minutes, with the following error:
Starting: Azure App Service Deploy:
==============================================================================
Task : Azure App Service deploy
Description : Deploy to Azure App Service a web, mobile, or API app using Docker, Java, .NET, .NET Core, Node.js, PHP, Python, or Ruby
Version : 4.198.0
Author : Microsoft Corporation
Help : https://aka.ms/azureappservicetroubleshooting
==============================================================================
Got service connection details for Azure App Service:'frontend'
Package deployment using ZIP Deploy initiated.
Deploy logs can be viewed at https://{MYAPPSERVICENAME}.scm.azurewebsites.net/api/deployments/62cf55c3f1434309b71a8334b2696fc9/log
Successfully deployed web package to App Service.
Trying to update App Service Application settings. Data: {"SCM_COMMAND_IDLE_TIMEOUT":"1800"}
App Service Application settings are already present.
Executing given script on Kudu service.
##[error]Error: Unable to run the script on Kudu Service. Error: Error: Unable to fetch script status due to timeout. You can increase the timeout limit by setting 'appservicedeploy.retrytimeout' variable to number of minutes required.
Successfully updated deployment History at https://{MYAPPSERVICENAME}.scm.azurewebsites.net/api/deployments/3641645137779498
App Service Application URL: http://{MYAPPSERVICENAME}.azurewebsites.net
Finishing: Azure App Service Deploy:
This YML solved my issues, along with upgrading the Azure App Service Node version to 14.*.
What I did was move npm install and npm run build to the build stage and remove them from the deployment stage's inline script.
So the package is ready before deployment; after a successful unzip it starts the app using pm2 serve /home/site/wwwroot/build --no-daemon --spa, since it is running in a Linux App Service. (This works if your build directory is within wwwroot; if not, update the path accordingly.)
# Node.js React Web App to Linux on Azure
# Build a Node.js React app and deploy it to Azure as a Linux web app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- develop

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: 'YOUR-SUBSCRIPTION'
  # Web app name
  webAppName: 'AZURE_APP_NAME'
  # Environment name
  environmentName: 'APP_ENV_NAME'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '14.x'
      displayName: 'Install Node.js'
    - script: |
        npm install
      displayName: 'npm install'
    - script: |
        npm run build
      displayName: 'npm build'
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: 'Azure App Service Deploy: poultry-web-test'
            inputs:
              azureSubscription: $(azureSubscription)
              appType: webAppLinux
              WebAppName: $(webAppName)
              packageForLinux: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              RuntimeStack: 'NODE|14-lts'
              StartupCommand: 'pm2 serve /home/site/wwwroot/build --no-daemon --spa'

Problem with configuring Netlify CMS with Git Gateway

I am trying to use this Gatsby starter with Netlify CMS. https://github.com/stackrole-dev/gatsby-starter-foundation
I followed the instructions exactly, but after enabling Git Gateway, when I try to log in as admin I encounter this error message:
Your Git Gateway backend is not returning valid settings. Please make sure it is enabled.
I have no idea why it is not working.
My config.yml is
backend:
  name: git-gateway
  commit_messages:
    create: 'Create {{collection}} “{{slug}}”'
    update: 'Update {{collection}} “{{slug}}”'
    delete: 'Delete {{collection}} “{{slug}}”'
    uploadMedia: '[skip ci] Upload “{{path}}”'
    deleteMedia: '[skip ci] Delete “{{path}}”'

local_backend: true # run npx netlify-cms-proxy-server for local testing

media_folder: "static/assets"
public_folder: "/assets"

collections:
You need to enable Git Gateway and external providers in your Netlify settings, as shown in the Netlify documentation.
This configuration can be found under https://app.netlify.com/sites/YOURNAME/settings/identity
In addition, your config.yml lacks:
backend:
  name: git-gateway
  repo: username/repository
  branch: master
Note: change username and repository to your own names.
You can enable Git Gateway under Settings > Identity > Services.

Laravel project works perfectly on localhost but responds with error 500 on GAE

I'm fairly new to working with live projects.
My project runs perfectly on localhost; I deployed the exact same copy to Google App Engine using the command gcloud beta app deploy.
My welcome page works perfectly:
As well as my auth pages.
Straight after the auth process I get the following response:
To verify that the account has been authenticated, my route URL redirects to the dashboard:
example.com/admin/users
My app.yaml file is as follows:
runtime: php
env: flex

runtime_config:
  document_root: public

# Ensure we skip ".env", which is only for local development
skip_files:
  - .env

env_variables:
  # Put production environment variables here.
  APP_LOG: errorlog
  APP_KEY: App-key
  STORAGE_DIR: /tmp
  CACHE_DRIVER: file
  SESSION_DRIVER: file
  ## Set these environment variables according to your CloudSQL configuration.
  DB_HOST: localhost
  DB_DATABASE: lara
  DB_USERNAME: root
  DB_PASSWORD: password
  DB_SOCKET: /cloudsql/connection-name
  MAIL_MAILER: smtp
  MAIL_HOST: smtp.mailtrap.io
  MAIL_PORT: 2525
  MAIL_USERNAME: username
  MAIL_PASSWORD: password
  MAIL_FROM_ADDRESS: from@example.com
  MAIL_FROM_NAME: "{App-Name}"

# We need this for the flex environment
beta_settings:
  # For Cloud SQL, set this value to the Cloud SQL connection name.
  cloud_sql_instances: connection-name
Here is my log:
This is the view it is looking for:
My routes:
It's funny how small things create the biggest issues; it took me over 3 weeks to resolve this one.
While researching, I discovered that Google App Engine is case-sensitive, so here are the steps I used to resolve the issue:
First, I checked my routes using php artisan route:list: my route was admin.users.index while my file path was views/Admin/Users/index.blade.php, so I changed all my folders to lower-case to match the route.
Then I ran the following commands:
php artisan cache:clear
php artisan route:clear
php artisan view:clear
Lastly, I added the following script under "scripts" in my composer.json file:
"post-install-cmd": [
    "chmod -R 755 bootstrap/cache",
    "php artisan cache:clear"
]
Deployed using gcloud app deploy
Worked like a charm.
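Before deploying, this class of problem can be caught early by listing any view paths that contain uppercase letters, since those resolve on a case-insensitive local filesystem but not on App Engine's Linux filesystem. A quick sketch, assuming a standard Laravel layout with views under resources/views:

```shell
# From the Laravel project root: print any view folders or files whose names
# contain uppercase letters; rename them to lower-case to match the dotted
# route/view names (e.g. admin.users.index -> views/admin/users/index.blade.php).
mkdir -p resources/views            # harmless if the folder already exists
find resources/views -name '*[A-Z]*'
```

Any path this prints is a candidate for the case-sensitivity bug described above.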

Driver:com.mysql.jdbc.Driver@75b3ef1a returned null for URL:jdbc for grails app on app engine

I followed the steps described here to connect my Grails 3.2.9 app to a Google Cloud SQL instance on the Google App Engine flexible environment:
http://guides.grails.org/grails-google-cloud/guide/index.html#deployingTheApp
My grails version is as follows
==> grails -version
| Grails Version: 3.2.9
| Groovy Version: 2.4.10
| JVM Version: 1.8.0_131
My application.yml looks as follows:
# tag::dataSourceConfiguration[]
dataSource:
    pooled: true
    jmxExport: true
    dialect: org.hibernate.dialect.MySQL5InnoDBDialect

environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    test:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE
    production:
        dataSource:
            driverClassName: com.mysql.jdbc.Driver
            dbCreate: create-drop
            url: jdbc:cloudsql://google/{DATABASE_NAME}?cloudSqlInstance={INSTANCE_NAME}&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user={USERNAME}&password={PASSWORD}&useSSL=false
            properties:
When I run locally using
grails run-app
the app runs correctly.
I run
./gradlew appengineDeploy
to deploy, and it deploys correctly.
But when I try to open the scaffolded pages in the browser, I see the following error in the logs:
==> gcloud app logs tail -s default
ERROR --- [          main] o.h.engine.jdbc.spi.SqlExceptionHelper : Driver:com.mysql.jdbc.Driver@75b3ef1a returned null for URL:jdbc:cloudsql://google/{DATABASE_NAME}?cloudSqlInstance={INSTANCE_NAME}&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user={USERNAME}&password={PASSWORD}&useSSL=false
In addition, the following error is also seen in the logs:
ERROR --- [     Thread-16] .SchemaDropperImpl$DelayedDropActionImpl : HHH000478: Unsuccessful: alter table property drop foreign key FKgcduyfiunk1ewg7920pw4l3o9
Does the HHH prefix indicate that it is using the H2 database in the production environment?
Please help me debug.
It seems the issue is linked to Hibernate. The same error for Grails is discussed here:
https://hibernate.atlassian.net/browse/HHH-11470
You are using MySQL, because as you can see the error says:
Driver:com.mysql.jdbc.Driver@75b3ef1a returned null for
Your problem is that you need to configure the URL with your particular details, changing {DATABASE_NAME} to the name of your database, and likewise for the other placeholders.
You can see how to replace them in the example at http://guides.grails.org/grails-google-cloud/guide/index.html#dataSourceGoogleCloudSQL
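For illustration, a fully substituted production block might look like the sketch below. Every value in it is a hypothetical placeholder (project, region, instance, database, and credentials); use your own Cloud SQL connection details:

```yaml
production:
    dataSource:
        driverClassName: com.mysql.jdbc.Driver
        dbCreate: update
        # my-project:us-central1:my-instance and bookdb are made-up examples
        url: jdbc:cloudsql://google/bookdb?cloudSqlInstance=my-project:us-central1:my-instance&socketFactory=com.google.cloud.sql.mysql.SocketFactory&user=dbuser&password=dbpass&useSSL=false
```

Leaving the literal {DATABASE_NAME}-style braces in the URL is exactly what makes the driver reject it and return null, which is the error shown in the logs.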

How to set up MK Livestatus on Nagios?

I am using Check_MK based monitoring on Nagios.
Check_MK Version: 1.2.0p4
OS: Linux
Nagios Core 3.2.3
I want to fetch the Nagios page of a remote server onto a local server using MK Livestatus.
How can I achieve this?
Nagios Check_MK Multisite (plugin)
This plugin allows users to view and manage distributed Nagios instances through a single web-based interface.
However, by default it doesn't support access to pnp4nagios graphs (for hosts/services from remote Nagios instances) through a single Multisite URL.
To access PNP4Nagios graphs of hosts/services from remote Nagios instances through a single Multisite URL, we need to add an Apache proxy redirect setting.
multisite.mk conf file:
This is my check_mk/multisite.mk conf file, from the primary Multisite server (the production server); SITE1 and SITE2 are two remote Nagios instances.
OMD[production]:~$ cat etc/check_mk/multisite.mk
...
sites = {
    # Primary site
    "local": {
        "alias": "PRODUCTION"
    },
    # Remote site
    "SITE1": {
        "alias": "SITE1",
        "socket": "tcp:XXX.XXX.X.XX:6557",
        "url_prefix": "/SITE1/",
        "nagios_url": "/SITE1/nagios",
        "nagios_cgi_url": "/SITE1/nagios/cgi-bin",
        "pnp_url": "/SITE1/pnp4nagios",
    },
    # Remote site
    "SITE2": {
        "alias": "SITE2",
        "socket": "tcp:XXX.XXX.X.XX:6557",
        "url_prefix": "/SITE2/",
        "nagios_url": "/SITE2/nagios",
        "nagios_cgi_url": "/SITE2/nagios/cgi-bin",
        "pnp_url": "/SITE2/pnp4nagios",
    },
}
...
OMD[production]:~$
After making these changes in the multisite.mk file, the Livestatus data of the remote Nagios instances will be visible on the local site.
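For the tcp:...:6557 sockets above to work, each remote Nagios must also have Livestatus loaded and exposed over TCP. A sketch of the remote side, with assumed install paths that will differ per distribution; in the remote nagios.cfg:

```
# Load the Livestatus event broker module (both paths are assumptions;
# adjust them to your mk-livestatus install and Nagios var directory)
broker_module=/usr/local/lib/mk-livestatus/livestatus.o /var/lib/nagios/rw/live
event_broker_options=-1
```

The Unix socket created by the module is then typically published on TCP port 6557 via an xinetd service running unixcat, which is the port the "socket" entries in multisite.mk point at.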
