stardustsystem

joined 2 years ago
[–] stardustsystem@lemmy.world 296 points 1 week ago (11 children)

Couldn't have happened to a nicer guy.

No really, if he was a nicer guy this probably wouldn't have happened.

[–] stardustsystem@lemmy.world 9 points 3 weeks ago

They'd better Run run run run run run awaayyyyyyy

[–] stardustsystem@lemmy.world 106 points 1 month ago* (last edited 1 month ago) (4 children)

If there's someone prepared to argue in court about why the UK's Online Safety Act is a terrible idea, holy crap is it NOT 4chan

[–] stardustsystem@lemmy.world 34 points 1 month ago (2 children)

The original Silent Hill games were created by an internal group within Konami called Team Silent. The team was assembled from underperforming Konami staff and assigned a Resident Evil competitor that Konami higher-ups expected to flop, which would give them an excuse to fire said underperforming staff. Instead, the team struck gold with SH1.

Team Silent suddenly had fame of their own, but they were still Konami's red-headed stepchild. SH2 through SH4 were similarly successful, with SH2 widely regarded as the high point of the series.

After SH3, key staff started leaving Konami. By the time the fifth game, Homecoming, came out, much of the original team had been replaced by people more aligned with the Konami bigwigs who wanted to turn the series into a moneymaker rather than the quiet success it had been built as.

An example of this is the Silent Hill HD Collection for PS3 and Xbox 360. The 'remasters' of SH2 and SH3 were anything but: they had to be built from incomplete codebases, with no original staff left to consult, and even some of the original voice acting was missing because Konami straight-up deleted the original source code.

There are those who say SH never recovered from the gutting of Team Silent, and they tend to get louder with each new entry in the series. Sometimes they have valid points, but not always.

[–] stardustsystem@lemmy.world 6 points 1 month ago

Gorillaz - Sunshine in a Bag

[–] stardustsystem@lemmy.world 22 points 1 month ago (2 children)

Sure. God told Bush there were WMDs in Iraq, too.

[–] stardustsystem@lemmy.world 3 points 2 months ago

END THE BABADOOK ERASURE

[–] stardustsystem@lemmy.world 21 points 2 months ago (2 children)

You mean Military Operation Censorship? Or have we dropped that pretense?

[–] stardustsystem@lemmy.world 9 points 2 months ago (1 children)

I use Nextcloud for contacts, calendars, files, bookmarks, passwords, to-do lists, Kanban boards, and recipes. You absolutely can turn Nextcloud into a Microsoft 365 competitor if that's your jam

[–] stardustsystem@lemmy.world 6 points 3 months ago (1 children)

You might want to set up dynamic DNS for your domain. If you're hosting from a residential internet connection, your ISP will change your address eventually. ddclient can report your current IP to your registrar or DNS provider on a schedule, so if the address changes, the domain moves along with it.
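A minimal ddclient.conf looks something like this. The protocol, server, and credentials below are placeholders; check what your DNS provider actually supports:

```
# /etc/ddclient.conf -- placeholder values throughout
daemon=300                          # re-check the address every 5 minutes
use=web, web=checkip.dyndns.org     # discover the current public IP
protocol=dyndns2                    # a protocol many providers speak
server=members.dyndns.org
login=your-username
password=your-password-or-api-token
yourdomain.example.com              # the hostname to keep updated
```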

 

Hello everybody, happy Monday.

I'm hoping to get a little help with my most recent self-hosting project. I've created a VM on my Proxmox instance with a 32GB disk and installed Ubuntu, Docker, and Cosmos on it. Currently I have Gitea, Home Assistant, Nextcloud, and Jellyfin installed via Cosmos.

If I want to add more services to Cosmos, I need to move the containers off the VM's 32GB disk and onto an NFS share mounted in the VM, which currently has something like 40TB of storage. My hope is that moving the containers will let them grow on their own terms while the OS disk stays the same size.

Would some kind of link let me move the files to the NFS share while they still appear at their current paths in the host OS (Ubuntu 24.04)? I'm not concerned about the NFS share being unavailable: it runs on the same server that virtualizes everything else and is configured to start before everything else, so the share should be up and running by the time this VM boots. If anyone can see an obvious problem with that premise, though, I'd love to hear about it.
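To make the question concrete, something like these two /etc/fstab lines is what I have in mind. The server address and paths are placeholders, and I'm guessing at where the container data actually lives:

```
# Mount the NFS share at boot; _netdev makes it wait for the network
192.168.1.10:/export/appdata  /mnt/appdata  nfs  defaults,_netdev  0  0

# Bind mount so the data physically lives on the share but still
# appears at the path the containers already use
/mnt/appdata/cosmos  /var/lib/cosmos  none  bind  0  0
```

The other option I've seen mentioned is pointing Docker's data-root at the share in /etc/docker/daemon.json, which would move everything under /var/lib/docker in one go.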

 

Hey folks! Hope your day's going good.

I'm hoping someone else has had this problem or knows the application well enough to help me. I'm moving my main desktop from Windows 10 to Linux (Q4OS, Debian-based) and it's gone well so far.

The only thing I truly need Windows for is work, so I've decided to build a Windows 11 VM on my Proxmox server and remote into it when I need to work. The install went smoothly, and my M365 user is the Admin of the W11 box. Remote Desktop is enabled, and my user is added to the Remote Desktop Users group on the local machine.

I had issues remoting in from anywhere at first, but after researching I was able to make a shortcut that works from a Windows machine by adding the options below to the .rdp file. With these added, a web page opens and takes me through M365 authentication, and then I'm remoted in.

```
username:s:.\AzureAD\name@domain.tld
enablecredsspsupport:i:0
authentication level:i:2
```

*Note: email address changed for anonymity*
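For completeness, the whole file is tiny. With the standard `full address` key naming the target, a minimal version looks like this (the hostname is a placeholder):

```
full address:s:w11vm.example.lan
username:s:.\AzureAD\name@domain.tld
enablecredsspsupport:i:0
authentication level:i:2
```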

I've tried and failed several different ways to remote into this machine via Remmina. It works as described from Windows machines, but Remmina doesn't seem able to open the web page that handles the sign-in. Instead, I get Remmina's own login prompt, which I've so far been unable to log in through. This happens whether I create a profile from scratch or import the previously mentioned .rdp file.

I have two Windows 10 VMs that are just regular standalone machines, and I have no trouble remoting into them; it's only the Azure/Entra-joined machine that causes this.

I'd like to use my Azure account on the VM so I can keep work at work, so to speak, and so I don't have to activate Windows separately (a license is included in my business account). If anyone's got some kind of solution, or can tell me how to apply the options above in Remmina, I'd love to know how.
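For anyone who wants to poke at this outside Remmina: since Remmina is built on FreeRDP, I believe the rough command-line equivalent of the options above would be something like the following. The hostname is a placeholder, and -sec-nla is my guess at the counterpart of enablecredsspsupport:i:0:

```
# Disable NLA (as enablecredsspsupport:i:0 does) and skip certificate prompts
xfreerdp /v:w11vm.example.lan /u:'.\AzureAD\name@domain.tld' -sec-nla /cert:ignore
```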

Hi selfhosted! Hope you're having a good day :)

I'm pretty new to self-hosting and have been traipsing through a minefield attempting to make Nextcloud AIO work inside Docker. The instance runs for a few days or weeks, then the website starts getting extremely slow, then it dies entirely. Usually either the ClamAV or the Apache container gets stuck in an unhealthy state that no number of reboots or reinstalls can fix.

Quick context for how this all works: I have one machine that runs Proxmox and a group of VMs for various purposes. One such VM runs my Nextcloud, on Ubuntu 23.10 with Docker and the Nextcloud AIO package.

Another VM hosts OpenMediaVault, which serves a set of SMB shares mounted on the Nextcloud VM to act as its storage. The mount points (I think that's the right word) on that VM have their user and group permissions set according to AIO's documentation. Proxmox is configured to boot the OMV VM first, then boot the rest in sequence once the files are available.
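For reference, each share is mounted with an fstab line along these lines. The host, share name, and credentials file are placeholders; uid/gid 33 is www-data, which is the ownership the AIO docs call for:

```
//omv.example.lan/nextcloud  /mnt/ncdata  cifs  credentials=/etc/smb-credentials,uid=33,gid=33,_netdev  0  0
```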

Right now I've got Nextcloud handling synchronization of files, calendars, contacts, and Kanban boards via the Deck extension. Everything else can be abandoned at this point; these are the only functions I'm truly using. If this gives you an idea for an alternative app, I'd love to hear it.

So, after AIO broke for about the fifth time in the eight months since I started trying to self-host it, I've been looking at alternatives. Before I go that route, I want to try installing Nextcloud without Docker. Some of the posts I've read here suggest that the Docker distribution of Nextcloud has serious issues with stability and with safely installing updates.

I plan to make a new VM entirely for this, distro undecided. I still want to run it as a VM and still use my SMB shares for bulk storage.

So where would I begin if I planned to install Nextcloud directly on the VM rather than through Docker?
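From what I've read so far, the bare-metal route looks roughly like this on a Debian/Ubuntu-style distro. I'm sketching it from the manual-install docs, so treat the package list and occ flags as my best understanding rather than a tested recipe:

```
# Web server, database, PHP, and commonly required PHP modules
sudo apt install apache2 mariadb-server php php-mysql \
    php-xml php-mbstring php-curl php-zip php-gd php-intl php-bcmath

# Unpack the latest Nextcloud release into the web root
cd /var/www
sudo curl -LO https://download.nextcloud.com/server/releases/latest.tar.bz2
sudo tar -xjf latest.tar.bz2
sudo chown -R www-data:www-data nextcloud

# After creating a database and user in MariaDB, run the CLI installer
sudo -u www-data php /var/www/nextcloud/occ maintenance:install \
    --database mysql --database-name nextcloud \
    --database-user nextcloud --database-pass 'changeme' \
    --admin-user admin --admin-pass 'changeme'
```

After that it would presumably be a matter of pointing the data directory (or external storage) at the existing SMB mounts and setting up HTTPS.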
