Why are you leading your visitors to your channel on a monopolist site? To bring ad revenue? There's no need for video for your type of content in the first place.
I get it - a 2026 "hackers" campaign for binging yt. And in case you haven't noticed: appealing to the net neutrality debate of the last millennium is meaningless with just a bunch of monopolists left on the net profiting off vast public investments. The kind of thing traditionalist "hackers" in it for social recognition would be wasting their time on.
Because they're betting on the video finding its way onto people's feeds, thus raising awareness among non-techy people. Hard to do that with a random website.
please. I don't understand how the fuck we still don't have p2p social networks and private sharing groups. The number of ways to f* up any kind of control is massive - it's just that we end up writing some convoluted distributed mainframe when all people need is p2prss.
Once you've visited a site via a deep link, Safari insists on autocompleting any URL in that domain to the link you used, even if you just want the top-level URL/index of the site. You have to type out the entire URL and add a space at the end or something (and even that sometimes doesn't work) to stop iOS from doing it, which defeats the entire purpose of autocompletion. Btw, I switched off every autocorrection feature a long time ago. Still, I happen to mistype a lot compared to my old non-Apple phones (there was even an "it's not just you" article about it last year).
Apple needs to spend an entire release cycle on unfucking text entry and completion. However, with their QA lately (or lack thereof), they'd only manage to make it worse. The sad thing is they're still better than the alternatives, all things considered.
FYI, the Wikipedia article rightfully says SGML CONCUR usage is uncommon, but compared to the stated alternatives for overlapping markup, it's basically the only one that is tolerable as an actual markup language for use with a text editor. This is what it looks like:
<!doctype d -- element decls for a, b ... -->
<!doctype e -- element decls for a, x ... -->
<(d|e)a>
<(d)b>bla bla <(e)x>bla </(d)b> bla</(e)x>
</(d|e)a>
where the third "bla" span is marked up with overlap.
Basically, in case you've ever wondered, SGML CONCUR is the only reason the element name in end-element tags needs to be specified at all. In strictly nested markup (XML) it must always refer to the most recently opened start-element tag, hence it's redundant. SGML actually has the empty end tag "</>", but it didn't make it into XML.
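For illustration, a sketch of the empty end tag (assuming an SGML declaration with SHORTTAG enabled):

```
<p>bla bla</>
<em>bla <b>bla</></>
```

where each "</>" closes the most recently opened element, so the two at the end close b and then em.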
Anything you'd like to share? I did some research in the realm of classic robotic-like planning ([1]), and the results were impressive with local LLMs already a year ago, to the point that obtaining textual descriptions for complex enough problems became the bottleneck. That suggests prompting is of limited use when you could already describe the problem concisely and directly in Prolog, given Prolog's NLP roots and one-to-one mapping of simple English sentences. Hence that report isn't updated for GLM 4.7, Claude whatever, or other "frontier" models yet.
Opus 4.5 helped me implement a basic coding agent in a DSL built on top of Prolog: https://deepclause.substack.com/p/implementing-a-vibed-llm-c.... It worked surprisingly well. With a bit of context it was able to (almost) one-shot about 500 lines of code. With older models, I felt that they "never really got it".
> ISO "strings" are just atoms or lists of single-character atoms (or lists of integer character codes) [...]. Code written with strings in SWI-Prolog will not work in [other] Prolog.
That's because SWI isn't following ISO here (and is even moving away from ISO in other places, e.g. [1]).
ISO Prolog strings are lists of character codes, period. It's just that there are convenient string-manipulation-like predicates operating on atom names, such as sub_atom, atom_concat, atom_length, etc. ([2]). You'd use atom_codes to convert between atoms and code lists, or use the appropriate list predicates.
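A short sketch of the difference (queries assume an ISO-style system with the double_quotes flag set to codes, so "..." literals read as code lists):

```prolog
%% Atoms carry string-like predicates operating on their names:
?- atom_concat(foo, bar, A).      % A = foobar
?- atom_length(foobar, L).        % L = 6
?- sub_atom(foobar, 3, 3, _, S).  % S = bar (skip 3 chars, take 3)

%% atom_codes/2 converts between an atom and a code list:
?- atom_codes(hello, Cs).         % Cs = [104,101,108,108,111]
?- atom_codes(A2, "hello").       % A2 = hello, given double_quotes=codes
```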
Virtualization.framework was introduced in Big Sur. It builds on top of Hypervisor.framework and is essentially Apple's QEMU (in some ways quite literally, it implements QEMU's pvpanic protocol for example). Before QEMU and other VMMs gained ARM64 Hypervisor.framework support, it was the only way to run virtual machines on ARM Macs and still is the only official way to virtualize ARM macOS.
The new Tahoe framework you're probably thinking of is Containerization, which is a WSL2-esque wrapper around Virtualization.framework allowing for easy installation of Linux containers.
>a WSL2-esque wrapper around Virtualization.framework allowing for easy installation of Linux containers.
So Linux is now a first class citizen on both Windows and Mac? I guess it really is true that 'if you can't beat em, join em.' Jobs must be rolling in his grave.
Thief pretty much defined the stealth game genre, at least for me: it's basically game over if you try to go all out on enemies. I may be wrong, but I don't believe clearing a level of enemies is the way forward in later levels.
You can get rid of all human enemies by knocking them unconscious (I play expert mostly, so killing is forbidden anyway). But right, if you go rambo even on lower difficulty levels, you'll most likely get overwhelmed.
For the rest, you're limited by the supplies you buy or find, but I believe it's possible to clear mostly everything if you don't miss. I know because I've found myself running around the entire map to find the remaining 1% of the loot goal.
> You can get rid of all human enemies by knocking them unconscious (I play expert mostly so killing is forbidden anyway). But right, if you go rambo even on lower difficulty levels, you'll most likely get overwhelmed
I can't recall if they're in Thief 1, but in Thief 2 at least there are guards with helmets that are immune to the blackjack, though afaik none of them are immune to gas arrows/mines.
Guess what, you're not required to open <html>, <head>, or <body> either. It all follows from SGML tag inference rules, and the rules aren't that difficult to understand. What makes them appear magical is WHATWG's verbose ad-hoc parsing-algorithm presentation, which explicitly lists e.g. the elements that close their parents, originally captured from SGML but left unmaintained as new elements were added. This already started to happen in the very first revision after Ian Hickson's initial procedural HTML parsing description ([1]).
I also wish people would stop calling every element-specific behavior HTML parsers implement "liberal" and "tag-soup"-like. Yes, WHATWG HTML does define error-recovery rules, and HTML did introduce historic blunders to accommodate inline CSS and inline JS, but what's being complained about is almost always just SGML empty elements (aka HTML void elements) or tag omission (as described above), by folks not doing their homework.
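For instance, the tag omission rules mean none of the end tags below need to be written out; this is a sketch, but all spec-conforming parsers treat it identically:

```html
<!doctype html>
<title>tag inference</title>
<ul>
  <li>one   <!-- the next <li> implies </li> -->
  <li>two   <!-- </ul> implies the last </li> -->
</ul>
<p>first paragraph  <!-- a following <p> implies </p> -->
<p>second paragraph
```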
HTML becomes pretty delightful for prototyping when you embrace this. You can open up an empty file and start typing tags with zero boilerplate. Drop in a script tag and forget about getElementById(); every id attribute already defines a JavaScript variable name directly, so go to town. Today the specs guarantee consistent behavior, so this doesn't introduce compatibility issues like it did in the bad old days of IE6. You can make surprisingly powerful stuff in a single-file application with no fluff.
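A minimal sketch of that style (the ids counter and btn are made-up names; the named access via ids is specified in WHATWG HTML):

```html
<!doctype html>
<title>zero boilerplate</title>
<p id=counter>0</p>
<button id=btn>increment</button>
<script>
  // No html/head/body tags and no getElementById: every id attribute
  // already exposes its element as a same-named global.
  btn.onclick = () => counter.textContent = Number(counter.textContent) + 1;
</script>
```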
I just wish browsers weren't so anal about making you load things from http://localhost instead of file:// directly. Someone ought to look into fixing the security issues of file:// URLs so browsers can relax about that.
Welcome, kids, to how all web development was done 25-30 years ago. You typed up html, threw in some scripts (once JavaScript became a thing) and off you went. No CMS, no frameworks. I know a guy who wrote a fully functional client-side banking back office app in IE4 JS by posting into different frames and observing the DOM returned by the server. In 1999. Worked a treat on network speeds and workstation capabilities you literally can’t imagine today.
Things do not have to be complicated. That abstraction layer you are adding sure is elegant, but is it also necessary? Does it add more value than it consumes, not just at the time of coding but throughout the entire lifecycle of the system? People have piled abstraction on top of hardware from day one, but one has to ask, if and when did we get past the point of diminishing returns? Kubernetes was supposed to be the thing that makes managing containers simple. Now there are things supposedly making managing Kubernetes simple. Maybe, just maybe, this computer-stuff is inherently complicated and we're just adding to it by hoping all of it can eventually be made "simple"? Just look at the messages around vibe coding…
Today you first need AI to figure out what the JS-framework-of-the-week is, then you need AI to generate all the boilerplate code, and then you use AI to debug all the stuff you created :-)
Yeah it was hard to believe when I first learned about it, but it's true. I think I first found out when I forgot to put in a getElementById call and my code still worked.
Also, window.document.forms gets you direct access to all forms, a "name" attribute automatically exposes the element as a property of its parent, and "this" rebinds to the current element in inline event handlers.
The DOM API may have been very messy at creation, but it is also very handy and powerful, especially for binding to a live programming visual environment with instant remote update capabilities.
Speaking of forms: form.elements.username is my preferred way of accessing form fields. You can also use a field's .form prop to access its connected form. This is fundamental when the field lives outside the <form> ;)
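A quick sketch (the form id login and the field names are invented for illustration):

```html
<form id=login>
  <input name=username value=alice>
</form>
<!-- A field outside <form> joins it via the form attribute: -->
<input name=token form=login value=xyz>
<script>
  const f = document.forms.login;            // named access on document.forms
  console.log(f.elements.username.value);    // "alice"
  console.log(f.elements.token.value);       // "xyz", despite living outside
  console.log(f.elements.token.form === f);  // true: .form points back
</script>
```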
It's been there since the beginning, but it has several exceptions: it's not available in strict mode or in modules. Ask your ChatGPT if implied globals are right for you.
I liked learning this so much that I created a VS Code extension to enable go-to-definition, autocomplete, error checking, and type-on-hover for single-page HTML files, so I can properly use it when I'm prototyping.
> Someone ought to look into fixing the security issues of file:// URLs
If you mean full sandboxing of applications with a usable capability system, then yeah, someone ought to do that. But I wouldn't hold my breath; there's a reason nobody has done it yet.
Yes, I love quickly creating tools in a single file; if a tool gets really complex, I'll switch to a SvelteKit static site. I have a default CSS file I use for all of them to make it even quicker and not look so much like AI slop.
I think every dev should have a tools.TheirDomain.zzz where they put the different tools they create. You can make so many static tools, and I feel like everyone creates these from time to time when prototyping. There are so many free options for static hosting, and you can write bash deploy scripts so quickly with AI that it's literally just ./deploy.sh to deploy. (I also recommend writing some reusable logic for saving to localStorage/IndexedDB so it's even nicer.)
Imagine a very plausible situation. You have 1 HTML file at the top that wants to access hundreds of files in a subfolder. There is no way you can show Allow | Deny for every one of them. On the other hand, it's also possible for someone to take that file and put it in a folder like Documents or Downloads, so blanket allowing it access to siblings would allow access to all those files.
This could easily be solved by some simple contract like "webgame.html can only access files in a webpage/ subdirectory," but the powers that be deemed such a thing not worth the trouble.
I guess you're replying to my comment because you were triggered by my last sentence. I wasn't criticizing you specifically, but yeah, in another comment you wrote
> It probably didn't help that XHTML did not offer any new features over tag-soup HTML syntax.
which unfortunately reeks of exactly the kind of roundabout HTML criticism that is not so helpful IMO. We have to face the possibility that most HTML documents have already been written at this point, at least if you value text written by humans.
The CVEs you're referencing are due to said historic blunders allowing inline JS or otherwise tunneling foreign syntax in markup constructs (mutation XSSs are only triggered by serialising and reparsing HTML as part of bogus sanitizer libs anyway).
If you look at past comments of mine, you'll notice I'm staunchly criticizing inline JS and CSS (should always be placed in external "resources") and go as far as saying CSS or other ad-hoc item-value syntax should not even exist when attributes already serve this purpose.
The remaining CVE is made possible by Hickson's overly liberal rules for what's allowed or needs escaping in attributes vs SGML's much stricter rules.
Inline JS or CSS is fine if typed directly by humans. It's only a problem when generated. Generated resources should always be in separate files.
I like the flexibility of being able to make one file HTML apps with inline resources when I'm not generating code. But there should be better protections against including inline scripts in generated code unintentionally.