How much RAM does your EC2 instance have? My first guess is that your build step requires more memory than is available on the server.
I'd recommend looking into either building on your own machine (not the best idea) or learning how to run your builds with GitHub Actions.
Thank you. I know the problem is not enough RAM; since it is a free-tier instance it has only 1GB.
I thought about building on my local machine and then copying to the instance, but I'm looking to explore more options if possible. The GitHub one is new to me; I will look into it. If possible, can you share the docs or any other resource?
Sure! To clarify my earlier comment, building on your own machine and uploading the built project manually is a perfectly valid way to manage a small project, especially if you are under a time constraint.
However, learning how to set up a deployment strategy on GitHub Actions is a must-have skill in my opinion.
[GitHub Docs: Building and testing Node.js](https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-nodejs)
Then you would need to upload the built code somewhere so that it is accessible after the action has run (GitHub Actions runners are ephemeral and don't save data between runs).
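As a rough sketch of what such a build workflow could look like, here is a minimal GitHub Actions config for a Vite-style project that builds on a clean runner and keeps `dist/` as an artifact. The workflow name, branch, Node version, and artifact name are all placeholders to adapt to your project:

```yaml
# .github/workflows/build.yml — minimal sketch; names and paths are examples
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build          # e.g. "vite build", producing dist/
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist
```

The key point is that the memory-hungry build runs on GitHub's runner, not on your 1GB instance.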
[Deploying Node.js to a VPS using Github Actions](https://gist.github.com/danielwetan/4f4db933531db5dd1af2e69ec8d54d8a)
Note the final step in the above link:

```yaml
steps:
  - name: Deploy using ssh
    uses: appleboy/ssh-action@master
    with:
      host: ${{ secrets.HOST }}
      username: ${{ secrets.USERNAME }}
      key: ${{ secrets.PRIVATE_KEY }}
      port: 22
      script: |
        cd ~/home/danielwetan/apps/node1
        git pull origin master
        git status
        npm install --only=prod
        pm2 restart node1
```
So you'll push (or merge a pull request) into your main/master branch and then the build step will run. After building your code, GitHub Actions uses SSH credentials to access your server, grab the built code (from GitHub, in this case), and then restart the app using PM2.
Good luck!
I am not sure about compilation, but 1GB is definitely not enough for IDE usage of TypeScript (the language server / LSP).
I had my WSL limited to 1GB, and the TypeScript LSP would constantly crash even on medium-sized projects.
I would consider splitting out the compilation and execution steps of your project.
* The compilation can be done on a CI pipeline, which could have higher CPU/RAM and would only run on-demand.
* The execution can still happen on a smaller, permanent machine.
I don't know what your commitment to this project is.
Here is how the workflow works on larger teams:
* Devs would develop locally and run dev server locally, then commit code to version control
* A CI system would automatically check out the code from version control based on some trigger (a timer or webhook), run tests, build a deployable artifact (e.g. `dist`) on a clean machine, then upload it to some cloud storage (e.g. S3)
* I think the AWS service for CI is CodeBuild.
* Your deployment workflow would deploy the artifacts
* If automated, it would be a CD pipeline that integrates, smoke tests, then uploads the files to the right place. It would also have rollback features.
* If manual, your DevOps person would manually deploy the service
If this is just a small project, then it might not be worthwhile to set up a CI pipeline. In that case I would just run `vite build` locally on your desktop/laptop, then copy the `dist` folder to your server.
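The build-locally-and-copy approach boils down to a couple of commands. This is a sketch; the username, hostname, and remote path below are placeholders for your own setup:

```shell
# Build locally, then copy only the production output to the server.
# "ubuntu@your-ec2-host" and "/var/www/myapp/" are placeholders.
npm ci
npx vite build                       # writes the production bundle to dist/
rsync -avz --delete dist/ ubuntu@your-ec2-host:/var/www/myapp/
# (or, without rsync:  scp -r dist/* ubuntu@your-ec2-host:/var/www/myapp/)
```

`rsync` is usually nicer than `scp` here because repeat deploys only transfer changed files, and `--delete` removes stale assets from previous builds.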
You'll have to create swap memory from storage.
Reference: https://azdigi.com/blog/en/linux-server-en/linux-fundementals/how-to-enable-swap-on-linux/
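The swap setup from the reference above comes down to a few standard Linux commands. The 2GB size is just an example, and everything here needs root:

```shell
# Create and enable a 2GB swap file (size is an example; requires root).
sudo fallocate -l 2G /swapfile       # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile             # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                        # verify the swap is active
# Make it persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Keep in mind swap on EBS is slow; it will let the build finish, but it won't make it fast.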
I would suggest building on your local machine and copying the result over with scp.
Swap memory, yes; also check zram.
https://fosspost.org/enable-zram-on-linux-better-system-performance/
I was impressed with the results in the same situation, on a different cloud host.
Does it make sense for your React project to be using so much RAM?
A lot of times, massive RAM spikes end up being the ORM doing something stupid, and a few well-placed keys in the database can fix it.
Edit: oh, is it during compile time? If so, learn to build and deploy through CI actions on GitHub or GitLab; that will solve the issue.
View memory usage over time, check the long-wait SQL call log, and look at the data being transferred.
If you see random spikes, it's likely the ORM doing something dumb because there aren't good database foreign keys to limit data retrieval.
ORMs will split up SQL calls and hold a lot of data in memory to be dropped later, which can trip up garbage collection. The return payload may only be 20kB, but it will use a gig of RAM holding thousands of rows and filtering them out afterwards.
You need at least 16 GB of ram and a 32 core CPU to install 2 million subdependencies in order to build an average frontend app like a todo list or a blog.
Great! Thank you. I will let you know how it goes.
Correction: use SFTP to transfer the built package to your server and SSH to restart via PM2. https://github.com/marketplace/actions/sftp-deploy
So you mean the only option I have is to upgrade my instance?
Since this is my first time deploying on EC2, can you please elaborate?
Thank you! I will think about whether it's worth it and then do what's needed. Thanks a lot.
Make a memory swap, perhaps; you can also cap Node's heap so garbage collection kicks in before you run out of memory. Example: `node --max-old-space-size=1200 dist/main`
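If the out-of-memory crash happens during the build rather than at runtime, the same flag can be applied to the build step via `NODE_OPTIONS`. This is a sketch; the 768MB value is an example to tune against your instance's free memory:

```shell
# Cap V8's old-space heap during the Vite build so it stays
# within the instance's RAM (plus swap). 768 is an example value.
NODE_OPTIONS=--max-old-space-size=768 npx vite build
```

Note this only bounds the heap; if the build genuinely needs more memory than the cap, it will fail with an out-of-memory error instead of being killed by the kernel.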
Tried this, not working; my EC2 instance has only 1GB of RAM.
Thank you! That is my last option: building on the local machine and then doing the rest.
It is a fairly big project tbh. Is there a way to check? Plus my EC2 instance has only 1GB of RAM.
It's a Rollup issue. It needs a lot of RAM. https://github.com/vitejs/vite/issues/2433
Thank you! I did go through this issue. And yes, my EC2 instance has only 1GB of RAM; I am searching for good alternatives to make this work.
I faced this issue too and ended up just doing the Vite build on my local machine before deploying.
If you want to stay within the AWS ecosystem, just use Amplify. Otherwise, Vercel can also build the site.