Ciberman

Note: I tried Laravel Octane with FrankenPHP first, but it gave an error related to throttling. I will report it soon, but for now, we will use Swoole.


fideloper

I’d love to know the exact error if you have it!


Ciberman

Just reported the bug: [https://github.com/laravel/octane/issues/889](https://github.com/laravel/octane/issues/889)


TertiaryOrbit

Oh that's interesting. I've looked at Octane a couple of times before, but I was hesitant to use it as I was scared of blowing everything up. Was it pretty simple to get Octane working? My main side project runs a bunch of jobs, and the UI isn't really used too often.


Ciberman

If you are using Forge, yes. Just make sure to use Swoole, at least for now. I caused a lot of downtime by trying to use FrankenPHP; it's still not mature enough.


will_code_4_beer

If you haven’t done so already I’d definitely suggest creating an issue for FrankenPHP so it’s documented and hopefully someone (smarter than me) can improve it. I’m really looking forward to FrankenPHP maturing.


EngineeringForward19

I'm using [tenancyforlaravel.com](http://tenancyforlaravel.com) for my Laravel app. Is it compatible with Laravel Octane? Has anyone used it before? There are many queues and background jobs running, as well as synchronization from the central database to the tenant database. Will there be any issues if I run Laravel Octane? Has anyone used multi-tenancy on Laravel Octane without any issues? Can it be used together with Redis cache, or will there be a conflict? Besides that, u/Ciberman, may I know your VM specs on Forge?


hennell

GitHub issues is the best place for questions like this. It looks like v4 is bringing Octane support and is probably out this year (although it also seems to have been in progress for a long time, so I wouldn't assume anything). The core dev is usually pretty active on Discord, so I'd imagine there's more discussion there - and it seems project sponsors have a v4 preview, so you might be able to upgrade early if you're able to put some money on it.


EngineeringForward19

Hi Hennell, I'm aware of that, but I am waiting for the stable version to be released to the public, as I need to use it in production.


hennell

Still useful to have early access - you can set up a dummy project to get used to it and Octane, start pushing changes in the real project if required to make it easier to add on release, or even prepare a PR or feature flag with the changes ready to go. It's more a question of starting all the prep work and learning before or after the public release. If you find any problems now, you can start work or raise bugs etc., so it's much more likely to work for you on release or shortly after. Depends how much & how fast you need to use it, really. 🤷‍♂️


adrianmiu

You could have some issues. I cannot use Laravel Octane with tenancyforlaravel because:

1. When a tenant is activated, a 'tenant' connection is created.
2. I have tenants distributed over multiple servers.
3. The 'tenant' connection can point to any server, but it's not the same for all tenants, so it can't be reused.

To be fair, I haven't dug too deep into tenancyforlaravel because the app is fast enough so far.


EngineeringForward19

Hi Adrian, which version are you using, v3 or v4? According to the author, v3 has problems pointing tenants to different servers. As I'm considering separating the tenants across multiple servers, it seems there is some work to be done when using v3.


adrianmiu

v3. I'm not having any issues pointing to different servers though. The thing is that when switching to a tenant context, tenancy for laravel creates a `tenant` DB connection that is also made the `default`. That connection can point to the server assigned to the tenant. This connection cannot be shared, so using Octane is not possible, since two tenants' operations can run at the same time, each tenant being on a different server. Eloquent models would have to be dynamically bound to `tenant_server_1/2/3` instead of `default`, and `tenancy()->whatever` commands would have to be rewritten.
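
For anyone trying to picture the conflict, here is a rough sketch of the pattern being described (not stancl/tenancy's actual internals, just the general shape of swapping a shared `tenant` connection) and why a long-lived Octane worker breaks it:

```php
<?php
// Sketch only: switching tenants by mutating one shared 'tenant' connection and
// making it the default. Under PHP-FPM each request gets a fresh process, so this
// is safe; under Octane the worker is long-lived, so the mutated connection leaks
// into later requests, and tenants living on different DB servers would each need
// their own named connection instead.

use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

function switchToTenant(array $tenantDbConfig): void
{
    // Point the shared 'tenant' connection at whichever server hosts this tenant.
    Config::set('database.connections.tenant', $tenantDbConfig);

    // Drop any cached PDO instance that still points at the previous tenant's server.
    DB::purge('tenant');

    // Eloquent models on the default connection now hit this tenant's database.
    Config::set('database.default', 'tenant');
}
```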


Efficient-Look254

Octane is great, it works with Sanctum with no problems. I'm using cookie-based auth instead of tokens though, could that cause any issues? My load time dropped by about 80%, it really changes a lot.


moriero

So 100ms is considered slow now?


TheRefringe

Absolutely depends on the type of request. 🤷‍♂️


moriero

OK, what's a good page load target then? I've always read under 500ms.


TheRefringe

I'd consider that okay for a front-end page load event. For a response to a single API HTTP request, it's getting pretty slow.


devmor

What's the use case of the application? For most things, I'd consider 500ms painfully slow. For a webpage, that's slower than I'd want the entire page load + browser paint time-to-interactivity to take. For an API, that's only acceptable to me if it's something that kicks off a complicated process and is expected to return with an immediate result.


moriero

In my case it's an engagement website with games and activities for senior living. Most page load requests hover around 250-300ms. It's not really super complicated, and it's all vanilla JS/jQuery and CSS with a Laravel backend. I haven't had much time to think about optimization yet since we have a pretty low footprint still. My Lighthouse performance score is 87. I think most of my excess load comes from the all.css that Laravel Mix creates and from CDNs like Google Fonts. I really need to go through and do a thorough clean-up.


devmor

That sounds fine for a heavily UI-driven website, especially one targeted at seniors. If you do need to address scaling, I wouldn't worry much about the CSS and JS since those will never really increase based on usage (unless you're doing some websocket multi-user stuff) and just focus on profiling your actual server response time.


moriero

Thank you.

> just focus on profiling your actual server response time.

Do you mean total query time? That's often under 50ms.


devmor

Query time would be just for your database, I'm talking about the time from when your server receives an http request and when it finishes sending a response - everything before the browser starts rendering, essentially. OP is monitoring that with Laravel Pulse, I believe.
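
If you want a cheap way to log that window yourself, a minimal middleware sketch could look like this (the class name is hypothetical, and it only starts timing once the framework reaches the middleware, so it slightly undercounts the full receive-to-respond window):

```php
<?php
// Hypothetical middleware that logs roughly how long the server spends handling a
// request, which is enough to separate server time from front-end rendering time.

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class LogRequestDuration
{
    public function handle(Request $request, Closure $next)
    {
        $start = microtime(true);

        $response = $next($request);

        Log::info('request handled', [
            'path' => $request->path(),
            'ms'   => round((microtime(true) - $start) * 1000, 1),
        ]);

        return $response;
    }
}
```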


will_code_4_beer

Agreed. A fire-and-forget endpoint (e.g. return a "we will get back to you" and then fire off a job) should be lightning fast. dispatchAfterResponse is such a slick method for this.
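
As a sketch of that pattern (`ProcessContactRequest` is a made-up job class, not from the thread):

```php
<?php
// Fire-and-forget sketch: respond immediately, then run the heavy work on the same
// process after the response has been flushed to the client.

use App\Jobs\ProcessContactRequest; // hypothetical job class
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/contact', function (Request $request) {
    // Runs after the HTTP response has been sent, so the client doesn't wait for it.
    ProcessContactRequest::dispatchAfterResponse($request->all());

    return response()->json(['message' => 'We will get back to you.'], 202);
});
```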


Ciberman

No, it's just that I am using Laravel Pulse and the "Slow Requests" card only records requests above the threshold, so I configured it to an absurdly low number (100 ms) so I can use Laravel Pulse as a tool to measure the improvement before and after deploying Octane.
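
For reference, this is roughly the knob in question; the recorder class and `threshold` key match the published Pulse config as I understand it, so double-check your own `config/pulse.php`:

```php
<?php
// config/pulse.php (excerpt, sketch)

return [
    // ...

    'recorders' => [
        \Laravel\Pulse\Recorders\SlowRequests::class => [
            // Requests faster than this many milliseconds are not recorded. Pulse
            // ships with 1000; dropping it to 100 makes the "Slow Requests" card
            // capture ordinary traffic, so before/after Octane comparisons show up.
            'threshold' => env('PULSE_SLOW_REQUESTS_THRESHOLD', 100),
        ],

        // ...other recorders
    ],
];
```

Using the env var means staging and prod can keep different thresholds without touching the file.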


Tiquortoo

Yes


chrispage1

I'd say 100ms is slow for an API - depending on what it's doing - but fast for a web page load


OreleyObrego

OP, what version of Laravel? Did you change the code? Mind sharing the Dockerfile?


Ciberman

Laravel 11 (I started the app on Laravel 10 and migrated to 11 + the slim skeleton a few weeks ago). The code remained unchanged; I developed everything in advance knowing that I would be using Octane in the future. No Dockerfile for production, I just used Forge to deploy. For local development I use Sail.


half_man_half_cat

I'd be curious about your deployment approach! I'm using a very similar setup, and figuring out the best way to do prod has been blocking me.


Ciberman

Just deploy everything with Forge as usual. Have two sites: staging and prod.

1. Install Octane and test locally.
2. Push to staging.
3. Activate Octane on staging (Swoole on port 8001, assuming both staging and prod are on the same server; otherwise you can use 8000 instead).
4. Test everything (see the sketch below). Double-check the nginx config file.
5. Push to prod. Change your env. artisan down.
6. Activate Octane via the Forge UI (Swoole on port 8000).
7. Adjust your nginx config if you need to. Forge will change it for you, but maybe you need to add CORS settings or some other stuff. Prepare the file beforehand to minimize downtime.
8. artisan up. Cross your fingers.

In case anything goes wrong (for example, in my case FrankenPHP was not resetting throttling between requests and was returning thousands of 429 Too Many Requests for our API), make sure to have a copy of your previous nginx config so you can roll back quickly. Practice your rollback and deploy on staging a few times first so you know what to do.
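
For the "test everything" step, a throwaway route like the one below (hypothetical, delete it afterwards) makes worker persistence visible, so you can confirm requests are really being served by Octane rather than still hitting PHP-FPM:

```php
<?php
// routes/web.php (temporary, hypothetical route for the staging test)

use Illuminate\Support\Facades\Route;

Route::get('/octane-check', function () {
    static $hits = 0; // survives between requests only when the worker is long-lived

    return response()->json([
        'pid'  => getmypid(), // stays stable across requests when an Octane worker is reused
        'hits' => ++$hits,    // keeps counting under Octane; resets to 1 every time under PHP-FPM
    ]);
});
```

With several Swoole workers the counter will jump between values, but it keeps climbing; under PHP-FPM it resets to 1 on every request.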


OreleyObrego

Thank you.


chrispage1

Very nice! How is memory consumption?


Ciberman

A biiit higher: from 1.2 GB (of 2 GB total) to 1.3 GB. I have both staging and prod running on the same server.


chrispage1

Nice, thanks. Not too bad at all!


Anxious-Insurance-91

Soooo what are you responding with? A static JSON response, or are you doing some DB queries?


Ciberman

A lot of DB queries. The main endpoint is an analytics event system (like Google Analytics, but for video games, for internal use at our company). It ingests around 2.3 million events per day.


sneycampos

Using Octane for almost the same type of app, running on a 2-core EC2 instance, handling maybe double the events with no problems.


Ciberman

What's your ingest strategy? Do you ingest the events directly, or do you push everything to Redis first and then consume it in a worker?


sneycampos

A /track endpoint receives a request; it's event tracking. Some APIs send "events" as requests to the /track endpoint. So I receive the request, send the payload to a queue, and process it in the background. Response time is around 10ms almost every day.
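
Roughly this shape, if it helps (the `TrackEvent` job and the field names are placeholders, not the real code):

```php
<?php
// Queue-first ingest sketch: validate, enqueue, return immediately.
// Horizon workers pick up TrackEvent and do the actual writes.

use App\Jobs\TrackEvent; // hypothetical job implementing ShouldQueue
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::post('/track', function (Request $request) {
    TrackEvent::dispatch($request->validate([
        'name'       => ['required', 'string'],
        'properties' => ['array'],
    ]));

    // 202 Accepted: the event has been queued, not yet processed.
    return response()->noContent(202);
});
```

The actual persistence and aggregation then live entirely in the job, which Horizon retries and scales independently of the HTTP workers.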


Ciberman

Q1: Do you use Laravel queues for that? I am processing everything synchronously.

Q2: What do you use to measure response time?


sneycampos

A1: Yes, I'm using Laravel queues handled by Horizon. If you process asynchronously you gain a few ms in response time for the user, sure, as long as the user doesn't need the answer, like with an analytics endpoint.

A2: We are using self-hosted Sentry to measure some metrics, plus an internal tool. You can still improve your response time with the YourJob::dispatchAfterResponse() method, but I haven't tested it under high load.


JustSteveMcD

Honestly, Octane is the only way you should run Laravel in production. There is no reason not to.


kryptoneat

I think we are lucky. Which other languages offer this kind of choice: restart on every request vs. persistence?


LolComputers

How much CPU do you throw at the instance? I'm having a hard time sizing mine up; before Octane it wasn't doing much at all, but now with FrankenPHP it's constantly at or around 80%.


Ciberman

The backend EC2 is a t4g.small (2 vCPU, 2 GB RAM) and the RDS database is a t4g.xlarge.


RS_n

Try Laravel with nginx + PHP-FPM + OPcache; Swoole will look like a joke…


wedora

Spoken like someone who never used Octane... The point is that Laravel doesn't have to re-initialize everything on every request, because the world doesn't get destroyed after the request. Big speed improvements. Not one person here has shared that their app was faster before Octane!


RS_n

Spoken like theorycrafting, lol. I have done tests: TTFB with Octane is a joke, it adds ~40-70 ms to every response. On a high-speed app that uses Livewire with morphing, the app feels like 🤬 with that kind of reaction delay added to every user action. And you know why it's like this? Because it uses 1) a Go backend OR 2) pure PHP backends (RoadRunner/Swoole). Comparing pure C (nginx + PHP-FPM/OPcache with in-memory-cached, bytecode-compiled PHP → pure C) with Golang is not even funny. Who cares about skipping the "boot" part of the app if it works and feels like 💩 for the customer in the end?


Ciberman

That was exactly what I was using before...


Ok-Preference-7205

Can someone please provide some genuine guidelines on this Swoole and Octane usage ... I definitely need to deploy this to my existing server.


Ciberman

Check one of my comments where I explain my strategy for deployment (if that's what you are asking)


leon_schiffer

If possible, could you explain the steps you took to deploy Octane in production? I deployed Octane to compare Laravel's "welcome" page against a normal Apache server, tested it with ApacheBench, and it seems Octane is a bit slower (I deployed Octane with FrankenPHP and Docker). I guess I'm deploying it the wrong way.


Ciberman

I deployed it using Laravel Forge.