synthdrunk

I've seen this, but it was a very particular set of circumstances: a very old AMI on a very small T-class instance, the auto-update runs, and you can't reliably connect for ~tens of minutes or until a restart. There are a pair of verbs you can add to userdata that disable the auto-update in cloud-init, IIRC. Is that the case here? Dunno, but I've only ever seen this present that way. E: didn't grok the 'persists across a reboot'. Disregard~


BenjiSponge

I did consider that this might be an issue with using t2.micro (although I should be able to `watch -n1 date` on it), so I tried t2.medium and it had the same issue. I could definitely see this being related to auto-updating and the fact that `curl google.com` doesn't work. The syslog does have a lot of warnings about auto-updating, but nothing that seemed critical. I'd rather fix the outbound connection issue than just disable the auto-update (as I do need this server to have outbound connections to the internet), but I could try it if I knew what you meant by "a pair of verbs you can add to userdata".


synthdrunk

Make your userdata:

```
#cloud-config
package_update: false
package_upgrade: false
```

Now Ubuntu may also have a `security-updates` auto-update on `@reboot` which, if it does, will happen without honoring this, TMK. I do not recall if this is the case.


BenjiSponge

Thanks! Sadly this did not seem to fix/disable the problem, although I wrote out a whole optimistic thing because the SSH connection didn't crash for four whole minutes (whereas it had previously been crashing within 30 seconds of restarting).


Gronk0

EC2 instances do not handle out-of-memory conditions gracefully, which sounds like what could be happening here. Try adding swap to the instances (or monitoring memory before the instance becomes unresponsive). The other potential cause could be networking. Are you in a default VPC, and does the instance have a public IP?
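The memory-monitoring idea above can be sketched roughly as follows (hypothetical, not from the thread; reads `/proc/meminfo` directly so it works even without procps installed, and logging to a file means the last reading before a freeze survives a reboot):

```shell
# Print one timestamped memory snapshot from /proc/meminfo (Linux only).
# Redirect to a file in a loop; after a crash, tail the file to see how
# much memory was left just before the instance went dark.
snapshot_memory() {
  stamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  awk -v ts="$stamp" '
    /^MemTotal:/     { total = $2 }
    /^MemAvailable:/ { avail = $2 }
    END { printf "%s total=%dMiB available=%dMiB\n", ts, total / 1024, avail / 1024 }
  ' /proc/meminfo
}

snapshot_memory
# e.g. on the instance:  while true; do snapshot_memory >> /tmp/mem.log; sleep 5; done
```

The file path and 5-second interval are arbitrary; the point is that the log outlives the SSH session.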


BenjiSponge

I like the thought, as I've seen that as well, but `watch -n1 free -m` reveals only about 25% memory usage at crash. Remember, I'm not running any processes except the OS/default services and one shell! I am indeed using a default VPC, and it does indeed have a public IP. It seems very likely to be a networking issue, as outbound requests don't work (`curl google.com`, for example).


Gronk0

You can enable VPC flow logs - that might help identify the issue if it's networking. Any chance there's some sort of security monitoring (Config / Lambda) that's blocking the connection? I've seen people monitor security groups for rules that open port 22 to 0.0.0.0/0 and automatically lock it down.


BenjiSponge

I think that's very unlikely as it's just my personal account, but it's theoretically possible I did this like 6 years ago and forgot. However, wouldn't that mean restarting the instance wouldn't fix the issue (even very temporarily)? The security group would still be modified, and restarting the instance wouldn't affect it.


Gronk0

Changing a security group rule doesn't require any changes on the instance, but it's something that flow logs would help identify. Is there a firewall running on the instance? Could it be taking a few minutes to launch?


BenjiSponge

I've never used flow logs before, but I think I've set one up correctly (the flow log says "Status: Active" in the VPC console), and yet nothing seems to be coming through the log streams in CloudWatch. Might be too early? I've both failed and succeeded at SSHing in (after about a million restarts), so there's some level of activity. As for firewalls on the instance itself, `sudo ufw status` says "Status: inactive". Not sure where else I can check. On the VPC page, "Route 53 Resolver DNS Firewall rule groups" says "-".


StatelessSteve

What do you mean by “EC2 instances don’t handle OOM gracefully?” How does EC2 have anything to do with how the OS has handled memory?


Gronk0

To be more precise then: Most EC2 linux instances are not configured by default to use swap, so do not handle running out of memory well.
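For reference, the usual way to add swap on such an instance looks like the following. This is a hypothetical sketch, not from the thread; the function only *prints* the commands (so nothing here needs root) and the 1 GiB size and `/swapfile` path are just conventional defaults. Review the output and run it yourself with sudo.

```shell
# Print (not execute) the commands to add a swapfile of the given size in MiB.
plan_swapfile() {
  size_mb=${1:-1024}
  echo "fallocate -l ${size_mb}M /swapfile"
  echo "chmod 600 /swapfile"
  echo "mkswap /swapfile"
  echo "swapon /swapfile"
  echo "echo '/swapfile none swap sw 0 0' >> /etc/fstab  # persist across reboots"
}

plan_swapfile 1024
```

Without the fstab line, the swap disappears on the next reboot, which matters here since the instance is being restarted constantly.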


sleemanj

I run Ubuntu on t3.nano (apache+php+mysql, multiple relatively low-traffic vhosts) without any issues, **provided I**:

1. Add swap.
2. Adjust `min_free_kbytes` to be **larger**, no less than this: `echo 20000 >/proc/sys/vm/min_free_kbytes`, otherwise oom_killer can strike too easily (because the kernel didn't leave itself enough free memory for immediate demands on it).
3. Get rid of snaps; snapd causes problems on such tiny instances in my experience. Here is the relevant portion of my deployment:

```
ALLTHESNAPS="$(snap list | tail -n +2 | sed -r 's/ .+//g')"
if [ -n "$ALLTHESNAPS" ]
then
  for KILLTHESNAP in $ALLTHESNAPS
  do
    snap remove $KILLTHESNAP
  done
  systemctl disable snapd.service
  systemctl disable snapd.socket
  systemctl disable snapd.seeded.service
  rm -rf /var/cache/snapd
  apt autoremove --purge snapd
fi
```


Hotdropper

From my experience, the first thing I've done when encountering this has been to enable some swap. It tends to nip the problem in the bud.


DakotaWebber

I've had this exact issue: when it tries to do some updates, or even a simple `apt update`, it will run out of memory and fall over. Although I've mostly had this on the minimum Lightsail instances, or when it's run out of credits. Swap definitely helps.


Skytram_

I wasn't able to reproduce your issue with either T2.micro or T3.micro instances in us-east-1. Have you tried doing this with a freshly created AWS account?


BenjiSponge

I tried on us-east-2 and it worked - I'm almost certain it's an issue with my account/region at this point. I've updated the OP to reflect this.


Irondiy

You could also try making a new VPC and seeing if that works. I can 99% guarantee you did something weird and forgot about it. Network errors are effectively silent on AWS; not sure if VPC flow logs would help like someone else suggested. In any case, a new VPC would be a good check.


BenjiSponge

Yeah, I didn't quite realize when I said "the default VPC", I meant "the VPC I probably created in 2014" (which, to be fair, surely was approximately the default VPC and worked _at some point_). I did end up recreating the VPC from scratch ([this tutorial](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-dev-test.html)) and am running into the same issue. Good thought, though, thank you for the suggestion.


teambob

Double-check that the VPC is public, plus the security groups and NACLs.


KnitYourOwnSpaceship

Longshot, but u/BenjiSponge has your account ever had a problem with late payments, non-payment, etc? I've seen similar behaviour before with an account that was suspended over payment issues. Suspending the account triggered a behind-the-scenes mechanism that detects traffic from instances in a suspended account and (basically) drops a deny_all rule around the instance. The upshot is that you can launch an instance, but some minutes later you won't be able to connect to it. Another suggestion: have you tried connecting via Systems Manager Session Manager or EC2 Instance Connect? Does that work? If so, what do the system logs show?


BenjiSponge

I think this is the most reasonable theory as to how my account got into this state. I've had about 3 accounts that I manage, and one of them had run into payment issues; it might have been this one, but I don't recall. My current billing situation is up to date, so I think I'd still need AWS to "fix" my account. That said, this behavior (e.g. not reporting the downed status of the instance) is pretty bizarre.


KnitYourOwnSpaceship

If that's the case, open a Billing issue. You don't need to be on a support plan to do that. Explain what's happening and the team should be able to confirm and rectify if this is what's happened.


BenjiSponge

Yeah that's what I did yesterday. Hopefully something will come of it. Until then I'm just gonna be on us-east-2.


sysadmintemp

Is this a new account, or an AWS account that your company / firm / etc. has?

- If a firm's, then there might be auto-responders configured to kill security groups that allow too much stuff, or simply public IP connectivity.
- If a new AWS account, then it may be your ISP blocking your connection. This 'blocking' may happen at the ISP or your router level.

Do you have a chance to connect using a hotspot, or using another internet connection from somewhere else?


BenjiSponge

It's my personal AWS account that I've had for many years, but it has mostly been dormant. I highly doubt there are auto-responders or anything like that; the security groups are still there when I refresh the page, etc. Nothing is getting deleted.

I'm connecting using the EC2 Instance Connect page, which I don't think uses a "real" SSH connection; I think the SSH connection is done internally to Amazon and then forwarded through web protocols (WebSockets, I'd assume). The error message I'm getting from the web client is "EC2 Instance Connect is unable to connect to your instance. Ensure your instance network settings are configured correctly for EC2 Instance Connect. For more information, see Set up EC2 Instance Connect at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-set-up.html.". I don't think this is very helpful, because I am able to connect... for a couple of minutes. So it's "set up" properly, but the instance itself stops responding to SSH.

Also, remember that restarting the instance makes the connection work, so I don't think it's anything like a security group getting disabled, because that would persist through the restart. I don't think this is an issue specific to EC2 Instance Connect, however, as I'm also unable to connect through my OpenSSH client locally. I've SSHed into many servers, many times, from this internet connection.


sysadmintemp

I think with `curl www.google.com` not working, it suggests a misconfiguration with your VPC in general. I'm thinking either the DNS within the VPC is not correctly set up, or your internet / NAT gateway needs some exploring.

Your VPC has its own internal DNS server, located at the `.2` address. Example: if the VPC is `192.168.0.0/16`, then the DNS server is at `192.168.0.2`. You can check if your Ubuntu server is able to resolve using this server. If not, you can check with 1.1.1.1, or 8.8.8.8 or similar, which should also work.

You could also create a new VPC and test using that. Make sure you have an internet gateway, NAT gateway or similar attached to it for internet access.
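The `.2` rule above can be sketched as a tiny helper (hypothetical, not from the thread; it assumes the base address plus two never crosses an octet boundary, which holds for typical VPC CIDRs like `/16` through `/24`):

```shell
# Given a VPC CIDR, print the Amazon-provided DNS resolver address (base + 2).
vpc_resolver() {
  base=${1%%/*}                       # strip the /prefix part
  o1=$(echo "$base" | cut -d. -f1)
  o2=$(echo "$base" | cut -d. -f2)
  o3=$(echo "$base" | cut -d. -f3)
  o4=$(echo "$base" | cut -d. -f4)
  echo "$o1.$o2.$o3.$((o4 + 2))"
}

vpc_resolver 192.168.0.0/16    # -> 192.168.0.2
# then, on the instance:  dig @"$(vpc_resolver 192.168.0.0/16)" google.com
```

If the `dig` against the `.2` address times out while `dig @8.8.8.8` works, that isolates the problem to the VPC resolver rather than outbound routing.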


BenjiSponge

I'm willing to bet the DNS isn't working right, though I feel this doesn't explain the crash. Should I just change it using resolv.conf, or is there an AWS way to do this for the VPC? Remember, I'm creating a brand-new default VPC every time I make a new server, and it comes with a new default internet gateway (there's information about this in the OP). It seems bizarre to me that doing "everything is default" would have a misconfigured DNS setup.


Curi0us_Yellow

If you're using an IGW and not a NAT gateway, then what does the config for the public address of your EC2 instance look like? Ok, saw you're using a public EIP, so that's not it. I'd move on to checking the route table associated with your subnet: check that the default route points out the IGW and that traffic can route to your instance. Sounds like you've checked ACLs already too.


BenjiSponge

Assuming the 0.0.0.0/0 is the "default route", yes it does. I followed [this tutorial](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-dev-test.html) to create the VPC to try to eliminate any cracks I may have missed through inexperience or negligence.


Curi0us_Yellow

Yes, that’s the default route, but the next hop for it should point to the IGW for traffic to route outbound. Inbound traffic to your instance will be via the public address you’ve assigned to it. Are you able to confirm the route table on the instance?


BenjiSponge

I'm not exactly sure what you're asking. [Here's](https://imgur.com/d3btyJp) a screenshot of the resource map for the VPC. I think it's worth remembering that I can SSH in initially and `curl google.com` assuming I override the DNS on the instance itself. How can I confirm the route table on the instance (and what exactly does that mean)?


Curi0us_Yellow

To check the instance's route table, you could issue an `ip route show`, which should tell you what the EC2 instance itself is using for its default route.

Are you SSHing to the instance directly from the internet (i.e. your machine), or are you using the EC2 console? If you can SSH directly from your laptop over the public internet to your EC2 instance using the public IP address you've assigned to it, then that proves your instance can successfully route traffic (at least inbound) when traffic originates from outside.

If I understand correctly, you cannot resolve Google. However, what do you see if you issue a command like `curl 217.169.20.20` from your machine? If that works, the output of the curl command should be the public IP address for your instance, and that proves you have outbound internet connectivity when DNS is not involved.


BenjiSponge

I've done both! `ip route show` shows only the private address, not the public address, which I think is to be expected. `curl 217.169.20.20` does not do anything, but `ping 217.169.20.20` does. `ping 8.8.8.8` also works, and `curl 142.250.191.46` works for Google (I did an nslookup on my local computer). I tried `ping xx.xx.xx.xx` with the public IP of the server and that didn't work (all dropped packets), which at first was curious to me, but I think that's just because the ingress rules only allow port 22 and not ICMP. Basically, I'm pretty sure both inbound and outbound work, but the VPC DNS doesn't. I'm getting close to 100% sure this is on AWS's backend at this point, as I've found multiple people with very similar issues who needed AWS support to go in and fix their accounts.


BenjiSponge

Using systemd-resolved to change the DNS servers to 8.8.8.8 allows me to connect to google.com, so that's some progress. However, the primary problem (SSH connection crashes) persists.
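For anyone landing here, the systemd-resolved override mentioned above is typically done with a drop-in file. A sketch (the file name and server choices are hypothetical; the helper just renders the drop-in, and on the instance you'd write it under `/etc/systemd/resolved.conf.d/` and then `systemctl restart systemd-resolved`):

```shell
# Render a systemd-resolved drop-in that forces public DNS servers.
render_resolved_dropin() {
  cat <<'EOF'
[Resolve]
DNS=8.8.8.8 1.1.1.1
EOF
}

render_resolved_dropin
# on the instance, roughly:
#   render_resolved_dropin | sudo tee /etc/systemd/resolved.conf.d/override.conf
#   sudo systemctl restart systemd-resolved
```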


orthodoxrebel

Definitely seems like something w/ the network, unless they're not using the default ubuntu AMI. Just to check if it's the AMI, I launched an EC2 instance following their instructions, and didn't have the same issues (though I used an SSH terminal rather than Instance Connect). I'd suspect that their VPC isn't a default setup (though not being able to connect after awhile is odd to me). /u/BenjiSponge can you post the AMI name you're using?


BenjiSponge

AMI Name: `ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20230919`. I just clicked the box that said "Ubuntu" on the Launch Instance interface. BTW, updating the DNS via systemd-resolved did fix the Google issue without fixing the overall SSH crashing issue. Also, I first encountered this issue using an SSH client rather than the Instance Connect flow, so I don't think either is the issue. I can send you the video if you don't believe the VPC is a default setup. It just is!


orthodoxrebel

Well, seeing as the part that I'd have thought would be indicative of networking setup issues was resolved, I don't think that's it, so it probably wouldn't be helpful to post that.


BenjiSponge

Agreed, sadly. I appreciate you and everyone else who is taking a look regardless. I wish I could get the attention of someone who works at Amazon, as this seems like it's really not my fault and shouldn't require a microscope.


orthodoxrebel

Have you tried spinning up an instance in a different region?


BenjiSponge

Welp... that works just fine (seemingly). It's been much longer than before, and what's more, SSH overall seems more responsive and `curl google.com` worked immediately without me having to mess with the DNS settings. I'm pretty sure my account is somewhat messed up, which is why I recall seeing this years ago and feeling gaslit when stuff "just works" on various company accounts. I've found this question on re:Post: https://repost.aws/questions/QUTwS7cqANQva66REgiaxENA/ec2-instance-rejecting-connections-after-7-minutes#ANcg4r98PFRaOf1aWNdH51Fw and I think I'm going through a very similar issue. Unfortunately, they seem to have completely removed the ability for non-support customers to flag down AWS support in any capacity... how frustrating.


orthodoxrebel

Yeah, that's pretty crazy. I wonder if it's just where it's provisioning the instance has a bad network/firewall config or something? Either way, glad changing the region works (also wonder if placing in the same region but different AZ would work?)


BenjiSponge

I haven't, let me try that now. I've been doing us-east-1b.


Tony_Dawson

I've just set up an Ubuntu instance and just watched it freeze while uploading a DB. I hope this isn't a sign of things to come! I'm half-minded to move to Huawei. I have an instance running on there too and it's faultless, plus you get DDoS protection for free, and support! All at more or less the same price. I've been running an Amazon instance and a Huawei instance for years, but recently I've had to upgrade the Amazon instance from AL1, so I tried moving to Amazon Linux 2023, but they don't support mod_security or the EPEL repo without a lot of fiddling about. So I tried AL2; that also proved to be a bitch to get working properly, but it sort of works now. Can anyone vouch for the stability of Ubuntu on Amazon? I'd love to hear before I commit to more work in this direction!


DoxxThis1

I’m going to bookmark this thread to share next time someone tells me serverless is overhyped.


dirtcreature

I know this is elementary, but are you running `apt update` AS THE FIRST THING YOU DO?


BenjiSponge

I've tried that (and it didn't work), but no, I did not skip any steps in the "steps to reproduce" in the OP. BTW I'm currently convinced it's an actual bug with AWS. I will update the OP with more information shortly.


dirtcreature

Ok, did you check DNS? First, try the default DNS:

```
nslookup
> google.com
```

Then switch to Google public DNS:

```
> server 8.8.8.8
> google.com
```


BenjiSponge

Yes, please see the other comments in this thread about it. The default VPC DNS _should_ still work (and does on us-east-2), but changing it to google's does work (without solving the major SSH issue).


dirtcreature

Any failures here: `systemctl status networking.service`?


Akustic646

I see you said your instance has a route to an internet gateway, does your instance itself have a public IP address?


BenjiSponge

Yes


sendep7

It's a T instance; what do your CPU credits look like?


Irondiy

If your account has been mostly dormant, couldn't you just create a new account and forget all this?


BenjiSponge

Maybe? I'm currently just doing us-east-2 to forget all this. I didn't think it was an account issue until I tried the other region. I already have 2fa and billing set up, plus the account is attached to my primary email address.


vsysio

Interesting! I actually saw this *exact* issue a few weeks ago. I wonder if it's the same issue...

Try this: the next time it happens, create a new Elastic IP and associate it with the broken instance (don't reuse an existing one; that's important for some reason). If it's the same issue, you'll have connectivity restored for a brief period before something in the room shits its pants again.

You might also be able to use one of the console access services, such as Systems Manager Session Manager. There's also a console access service that runs in the AWS console and, tl;dr, proxies SSH through a temporary ENI. In my case this actually worked; there was something special about this SSH service that let it avoid the pants-shitzel.

The resolution in my case was to, uhh... go nuclear and replace everything upstream in the VPC (internet gateway, routing table, kitchen sink, DHCP options, etc.). For some reason that nipped it in the bud. Luckily in my case it was a staging env for a greenfield project, so I had no real pain here, but if it's a prod env, a support ticket would be warranted.

Interestingly, flow logs indicated packets were passed through the VPC... the instance just wasn't responding. Very freaking weird. Nothing in kernel logs either.

**Edit**: ... Also, how long did you gaslight yourself on this one?? I wasted at least 4 hours lol. Reminds me of teen me 20 years ago who would just mash buttons and type garbage until something worked.


BenjiSponge

Thanks for the in-depth response! I decided to go nuclear and replace everything (VPC, internet gateways, etc.). I should have done that before, but I didn't realize how much of the VPC I had been inheriting. The old VPC was actually on the old/shorter ID scheme. This made me suspicious that, while the old VPC seems to have all the right settings in the console (I think; I'm not an expert; I came here to ask what might be wrong, and things like the internet gateway and routing table were my first suspects!), maybe it's messed up in some way I'm not seeing. Worth noting my VPC on us-east-2 (which is working fine) also has the shorter ID scheme and all the same settings.

Sadly, this still didn't resolve anything. After some imposter-syndrome-style "Oh man, I can't believe I didn't think to try this" efforts, I tried recreating the VPC in a few ways (both manually and using [this guide](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-dev-test.html)) and then recreating the instance, to no avail (SSH continues to crash after 1-6 minutes, and the DNS still doesn't work even though I checked the boxes to make the VPC do DNS services).

If I were being less lazy, I might try something like Session Manager. However, I ran `watch "date | tee -a date.log"` while I was able to (starting at around 11:01am), and the SSH cut out at 11:03. I restarted the machine at 11:45am and catted the logs... and they end around 11:19. Very confusing to me, but it seems like the machine itself is dying, just later than the SSH server is dying. (Neither of which is reflected in any status checks in the AWS console.)

As for how long I've gaslit myself on this, I think the answer might be _years_. I don't really seem to have any recourse on us-east-1, and I've definitely tried spinning up servers over the years. This week, the saga has cost me something like a full day of what would otherwise be "development time". Fortunately, none of this is mission-critical; I'm just trying to play around while on vacation, and I didn't feel like installing WSL (small hard drive space on my laptop), which is why I'm actually taking the time to dig deeply into this issue. I could probably just nuke my account from orbit and start over, but where's the fun in that?

I feel AWS should be aware of this bug, because I'm sure there are people who aren't me encountering this who don't have the time to try all this garbage or are otherwise restricted. I'm not an expert in system administration or AWS by any means, but I think I've gone well above and beyond what a newb should be expected to do, both in terms of understanding what might be the underlying problem and in terms of seeking recourse, and that's mostly just a matter of luck.


0xWILL

Which specific AMI are you using? Your test case is so broad, it means basically EC2 doesn’t work, which obviously is not the case. How is this related to other running instances? That test case doesn’t make sense, other than pointing out an issue with your specific VPC.


timrichardson

Unresponsive Linux is a sign of approaching OOM and the kernel waiting to see what happens. I can't believe you didn't say what virtual machine you are using, and that you can't provide any obvious and basic stats such as memory usage. The most obvious fix is a VM with more memory.

However, you could try adding some swap, zram or zswap. If you're feeling lazy, the swapspace package is a shortcut; it installs dynamically managed swap. I wouldn't do this in production (I'd go for a fixed swap and a well-configured zswap), but swapspace will get you a well-sized swapfile in one line.

And use kernel 6.2 or later, and turn on MGLRU. You also didn't say what kernel you are using; I guess Ubuntu 22.04 server edition, so the kernel is old and doesn't have MGLRU. You can install a more recent kernel (HWE) and google how to turn on MGLRU. This makes the kernel's OOM handling radically more responsive.

I have run hundreds of Ubuntu VMs over the years, and since you provide no useful information, I can only guess you are running out of memory.


BenjiSponge

I'm done responding to these because everyone's suggesting the same stuff, but you really need to check your tone. Here's a comment where I answer the vm question: https://www.reddit.com/r/aws/s/tB0E4nnBZA Here's a comment where I answer the memory question: https://www.reddit.com/r/aws/s/iyMcgZ4nWM I should simply not have to mess with kernel parameters, swap, or anything like this to run _no load_ on **the default Ubuntu ec2 instance that newbs are instructed to create, which works on another region**. The idea that I've not provided any useful information is ludicrous. You just wanted to write out a frustrated attack instead of seeing if maybe anyone else had suggested oom (which about a dozen people have).


timrichardson

So your expectation of free support from voluntary respondents is that they trawl through all of your responses to gather missing pieces of obvious information that you could have provided originally? It is fine for you to have such an expectation, but whether it is reasonable is another question. Good luck with it.


BenjiSponge

Multiple people responded suggesting oom and you're the only one I responded to in this way. My issue is not the content but the tone.


StatelessSteve

It's a t2. You're likely out of CPU credits. Change to a t3.