Hacker News | oskarth's favorites

Love Tails, but I haven't used it in ten years. I have had Tails and Qubes disposable VMs on my mind though.

I switched off of Qubes last year to my own Alpine chroot with a hand crafted kernel and initrd that lives only in memory. I find turning off the computer when I'm finished and having it forget everything to be a very peaceful way to compute. I owe the internet a write up.

I feel like ramfs for root filesystems is an underused pattern more broadly. "Want to upgrade? Just reboot. Fallback? Pick a different root squashfs in the grub menu"


> "Misinformation" is just another word for "falsehood" or "untruth."

That’s not the whole truth. In fact, asserting untrue propositions is one of the easiest-to-counter forms of misinformation.

Real pros use humbuggery; of a set of n true propositions, pick a subset m to lead the audience to your conclusions and you haven’t even “lied”.

That’s why “fact checking” is such a popular way of narrative laundering: the truth of individual propositions alone never reveals whether someone was bullshitting you.

That’s also why the courtroom maxim is “truth, nothing but the truth, and the whole truth”. Only those 3 properties in combination would exclude misinformation. (Not saying courtrooms necessarily live up to this maxim.)

I agree with the spirit of the rest of your argument.


I basically set up my GDB with commands to stop on a specific pattern of "lseek, then close", and if the pattern isn't met it just automatically continues the program.

This is what the gdb script looks like:

    set height 0

    # catchpoint numbers: 1 = close, 2 = read, 3 = lseek
    catch syscall close
    catch syscall read
    catch syscall lseek
    disable 1 2

    # a read between lseek and close resets the search
    commands 2
     disable 1 2
     continue
    end

    # an lseek on the target fd arms the read/close catchpoints
    commands 3
     if $rdi == 31
      enable 1 2
      continue
     else
      continue
     end
    end
The lseek catchpoint (3) enables both read and close catchpoints; if the read catchpoint (2) is hit first it disables both and continues. This way we look for lseek followed by close without intervening reads.

It generates a few false positives but otherwise fairly quickly stops on the right syscall, at which point I could backtrace and prod the live program.


Hi Alan! I've got some assumptions regarding the upcoming big paradigm shift (and I believe it will happen sooner rather than later):

1. focus on data processing rather than imperative way of thinking (esp. functional programming)

2. abstraction over parallelism and distributed systems

3. interactive collaboration between developers

4. development accessible to a much broader audience, especially to domain experts, without sacrificing power users

In fact the startup I'm working at aims exactly in this direction. We have created a purely functional visual<->textual language, Luna ( http://www.luna-lang.org ).

By visual<->textual I mean that you can always switch between the code and the graph, in either direction.

What do you think about these assumptions?


If anyone wants a "weird old tip" for better enunciation: finishing schools had an exercise where students would practice speaking with their mouth full of marbles or gravel. After talking out loud for a few minutes, one can spit them out and have perceptibly better enunciation.

You can do this more safely by biting down on a wine cork. Read a page or two out of a book out loud, remove, and then read another. It's quite remarkable. (I learned this trick from a theater class back in high school I think…)

https://www.activepresence.com/blog/public-speaking-articula...


Here is a proven approach for at least the first part, building foundations and being ready for graduate work. Many Berkeley Ph.D. students passed through this route. Get the book "Berkeley Problems in Mathematics." It contains historical problems from the Berkeley math prelim exam, and solutions. Now don't look at any solutions yet.

This is the exam all Berkeley math Ph.D. students must pass within three semesters of arriving to stay in the program, and the fail rate is about 50%.

You will also need reference books, advanced undergraduate and beginning graduate textbooks. Buy, download, or borrow as appropriate.

Pick a problem (start with the older ones; they are easier). Set aside 30-60 minutes and try to solve it. No devices, no references at all: go to a library or a coffee shop without your devices. Don't give up till the time is over. If you cannot solve it (usually the case), still don't look at the answer. Hit the reference books (don't look up the problem online either; that will lead right to the answer and you won't learn much). Read and try to understand enough so that you can solve the problem. It is OK if you solve it this way (in the course of reading about it).

For bonus points, students studying for the exam will typically take entire old exams (available from the Berkeley website), take that to the library and just sit down for three to six hours and try to solve all the problems correctly. Then self-grade harshly. When you can do that for a recent exam (and get a good score), you will have more or less mastered undergrad math to the point that you could teach it.

Most important: you have to struggle to solve problems. Reading a solution is about as useful as watching someone else lift weights: you get minor tips on form but not any stronger.


I'll concur with this on most levels. The change I made was from a slightly different place: boxing. I joined a school that did BJJ and Muay Thai.

It worked out beautifully. After staring at a screen and sitting down for 8-10 hours, I could spend an hour or two getting the shit kicked out of me. Since I lived in the city and was traveling against traffic, it had a chance to die down while I was working out. Aside from the endorphin rush that accompanied the workout and the physical changes that accompanied struggling to the point of immobility 3x/wk, I got a nice confidence boost which stayed with me through most of the job interviews that followed after I relocated. It was like all the panic and nervousness of the week got expelled in the span of six hours. For six months after I stopped (moved across the country for grad school) I could sit through job interviews thinking, "There is nothing this person can do to me physically, mentally, or socially which I cannot handle. This person has no power over me. I am in control. I am the authority."

I might try ballroom dancing when I have the time again, as I think it's more practical from a social perspective than fightin'. (Seriously. The knowledge and confidence to dance wins 10/10 in social scenarios.) Still, try both!


Smart, well-meaning people on boards such as this refuse to see the Prison Industrial Complex for what it really is...a jobs program for police, courts, judges, jails, prisons, and of course, lawyers and politicians.

One group that "increased incarceration" definitely has a non-marginal positive impact on is all of the above.

Can you show me any examples, anywhere in history, where a group that directly benefits from and controls the entry level and duration of access has ever decided to lower their wages and put their livelihoods in jeopardy?

Because at the heart of it, that's the real effect that decreasing incarceration will have...closing jails, firing police officers, lowering ridiculous attorney fees, and shutting down county court complexes.

And we are surprised when the very people who benefit from the status quo don't see things the way we do?


I began my study of Chinese in 1975. I got to be good enough in reading Chinese to set the ceiling of scores obtained in an experimental administration of a test for Chinese language for speakers of English. (The test was not further developed by the test developers, but I saw the summary of norming administration scores.) I worked for quite a few years as a Chinese-English interpreter for government-sponsored people-to-people visits to the United States. I was just reading Chinese-language newspapers via Google News just before bopping over here to Hacker News.

Simply put, the biggest single mistake learners of Chinese make in learning Chinese characters is learning them only in isolation and not also by reading connected Chinese-language text in which Chinese characters appear in "compounds" (meaningful Chinese words). The full argument for doing this is developed by the late John DeFrancis (a brilliant language teacher and textbook author) in the front matter of his book Beginning Chinese Reader,[1] which is still a very worthwhile book decades after it was first published. DeFrancis built in a lot of spaced repetition in the book lessons, back in the days before "spaced repetition" was a trendy term, but he crucially also makes sure the lesson texts illustrate the various ways that Chinese characters enter into compounds to form high-frequency words in the modern Chinese vocabulary. Try it if you'd like to learn more Chinese in the most efficient way possible.

[1] http://www.amazon.com/Beginning-Chinese-Reader-Part/dp/03000...

P.S. for a more extensive discussion of how to learn natural human languages effectively, see a comment of mine from about two years ago that was very popular with other readers of HN:

https://news.ycombinator.com/item?id=6302276#up_6302816


You can have a security key on Google without a phone number, but you have to enroll with a phone number and then remove it. Google won't allow you to remove the phone number unless you have some other backup 2FA method available.

The best Google (actually: everywhere) 2FA "stack" in Q1 2017 is:

1. Hardware U2F

2. Software TOTP (Authenticator app on your phone)

3. Physically secured backup codes

4. Disabled SMS

My experience with the Y4 hardware and, particularly, software has not been great. I'm using the Y4 for SSH access (through ssh-compat mode on gpg-agent) and it's mostly OK, but if I try to use PIV mode on the same token I run into all sorts of problems. I'm bullish on U2F, but bearish on Unix token applications.


If people built more things instead of selling them, maybe there'd be less need for advertising (or politicians). When the value is actually clear there is far less need to convince people.

Make things that work; sell your vision, not the product. People will buy into your vision, accept more product shortcomings, and cheer you on as you improve.


Every month I need to upload invoices from all ~20 SaaS products I subscribe to into an accounting software. Most of the invoices can just be redirected from email to another SaaS that lets me download a zip file containing all invoices from a date range. Other software requires me to log in to the product, navigate to a page, and download a PDF or print an HTML page. I have browser-automated these laborious ones as well, so everything ends up in that zip file. It saves me 30 minutes monthly and, especially, saves me from the boring work.

It's entirely intentional. The point isn't just to transpose the numbers from one medium to another; it is to understand what the numbers mean. The best way to do that is the way Martin does it in that video. That's how anyone who does this professionally does it.

I work as a scientific programmer and I used to have that reflex; even deriding others who couldn't or wouldn't automate such things for wasting time. To me, 'data' was just a blob, to be looked at as little as possible.

I've done a 180 on that. The other day I started work on making a plant database, by hand; from designing the schema (columns and sheets in Excel really - blasphemy!) to typing in the values from encyclopedias, Wikipedia, and books by hand (ok, the Latin names I copy/paste). Yes, I could just use one of the several large, well-known databases, or one of the hundreds of specific-purpose ones. But making this database has taught me so much already, things I never would have learnt if I'd spent that time on writing import scripts.

Nowadays I let my students/analysts first do extensive EDA (exploratory data analysis), which is usually lots of tedious work that seems a waste of time to the programmer instinct. But it's not.


This is perceived as a big problem by many people. Here are some current options:

CSV:

You download CSV data from your bank (manually, probably). Then you convert it to ledger journal format using one of a number of tools: CSV2Ledger, reckon, Ledger, hledger (both have CSV support built in by now), or something you write yourself. This data is "single-entry" and doesn't know about your chart of accounts, so you augment and translate it into more useful general journal entries. There are several approaches being explored:

- rule-based - you set up rules, matching patterns in the description, which assign accounts and balancing postings. Eg: CSV2Ledger, hledger (currently; eg http://hledger.org/manual#csv-files).

- history-based - the CSV description is matched against transactions already in the journal, and the most similar one is used as a template to flesh out the new transaction. Eg: Ledger (http://ledger-cli.org/3.0/doc/ledger3.html#The-_003csamp_003...).

- artificial intelligence - I seem to remember reckon does something more clever?
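As a sketch of the rule-based approach (hypothetical rules in plain Python, not hledger's actual CSV rules syntax): match a pattern in the bank's description field, assign an account, and emit a balanced double-entry transaction.

```python
import csv
import io

# Hypothetical rules: (substring of description, account to assign).
RULES = [
    ("GROCERY", "expenses:food"),
    ("PAYROLL", "income:salary"),
]

def to_journal_entry(date, description, amount,
                     asset_account="assets:bank:checking"):
    """Turn one single-entry CSV row into a balanced journal entry."""
    account = next((acct for pattern, acct in RULES
                    if pattern in description.upper()),
                   "expenses:unknown")
    # Bank CSV uses negative amounts for money leaving the account,
    # so the balancing posting gets the negated amount.
    return (f"{date} {description}\n"
            f"    {account}  {-amount:.2f}\n"
            f"    {asset_account}  {amount:.2f}\n")

rows = csv.reader(io.StringIO("2024-01-05,GROCERY STORE,-42.50\n"))
for date, desc, amount in rows:
    print(to_journal_entry(date, desc, float(amount)))
```

The real tools add layers on top of this (field mapping, date parsing, dedup), but the core of the rule-based converters is this lookup.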

OFX:

- manual download - as above, but you download an OFX file. OFX records provide more structure than CSV but still need to be fleshed out. ledger-autosync does this, using history matching. It also skips transactions you've already saved to the journal, avoiding duplicates.

- automatic download - ledger-autosync can also handle the download, if your bank provides OFX Direct Connect. This is the most automated option at the moment, recommended. I pull transactions daily from Wells Fargo this way (though I'm going to quit one of these days since they charge too much for it).

GNUCash:

- In the distant past, ledger could read GNUCash files directly, so you could enter with GNUCash and report with ledger. This feature is long gone, but you might be able to get it working with an old Ledger version, and `print` into a journal file you can then use with modern *ledger.

Converting from other finance apps, eg mobile ones:

- As above, you might find adhoc ways to import from apps with nicer data entry. Eg use the iXpensit Pro app on iphone, export the CSV, convert that, add crazy automation duct tape until it's "smooth".

hledger web:

- hledger-web has an add form. It's not very good, but it's in your browser, and if you're really keen you could set it up to be accessible from your smartphone.

hledger add:

- hledger's built-in add command does assisted (history-based, tab completion etc.) interactive data entry on the console. Some folks may find that preferable to editing a text file.

Editor modes:

- for people used to text editors: ledger-mode provides some data entry conveniences for Emacs users, and there's vim-ledger, a ledger bundle for TextMate, etc.

HOWEVER...

If like me you're working on building discipline and insight into your finances, you may find nothing beats manual data entry for a while. You don't get the same awareness when everything is automated.


The reasons why I tried and failed with ledger:

- I never quite understood how to 'start' the ledger balance. Should I start on the first of the year when my account was at $56 or today when it's at $76 ... where does this $ come from?

but probably more this:

- I found it super laborious to enter transactions. I guess I grew up post-checkbook but when I'm not near my computer for a few days and have a pocketful (or not) of transactions to enter, I would get behind and then just give up. Even with logging into my bank account to cheat.

- Investments man. How does one track dividends, buy/selling in a text file. I probably need to take a finance class... :)


Tired of CRUD? What about learning CRA (Create Read Append)?

There are so many places where people did add "time" information to CRUD DBs because of the 'U' and 'D' that it's not even funny.

Learn a CRA DB and go apply to companies that are using, today, the technologies that will be used almost everywhere in the future.

Most CRUD DBs I've worked with would have had absolutely zero space issues had they been using a CRA DB, and it would have solved so many problems. The only downside of CRA vs. CRUD is that CRA DBs tend to be bigger (though not bigger than a CRUD DB that poorly re-models time, that said). But with today's hardware, and especially memory growing so fast and dropping in price so quickly (the two being related but not identical), it's really not a problem anymore for 99.99% of the companies out there to simply store facts in an ever-growing, append-only DB.

Just an example: there are many times where someone up the chain asks for some information and you either have to ask the DB guys to give you a backup of the prod DB at "time X" on some DEV/PREPROD environment or you have to go fetch business information by parsing logs.

These are two major fails. And they're mostly related to the fact that most CRUD DBs are modeled as "non-factual". The 'U' and 'D' are irreversible operations that lose business information and waste developers' time.

So learn about CRA DBs like Datomic (which you can, btw, back with a SQL DB like PostgreSQL) and then go apply to companies who "saw the light".
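To make the idea concrete, here's a toy sketch in Python (illustrative only; this is not Datomic's API): every write appends an immutable fact tagged with a transaction number, so "what did the DB look like at time X" becomes an ordinary query instead of a restore-from-backup exercise.

```python
from collections import defaultdict

class FactStore:
    """A toy append-only ("CRA") store. Nothing is ever updated or
    deleted; each write appends a (tx, value) fact, so any past state
    of the database remains queryable."""

    def __init__(self):
        self.tx = 0
        self.history = defaultdict(list)  # (entity, attr) -> [(tx, value)]

    def assert_fact(self, entity, attr, value):
        self.tx += 1
        self.history[(entity, attr)].append((self.tx, value))
        return self.tx  # transaction id, usable later for as-of queries

    def value_at(self, entity, attr, tx):
        """The attribute's value as of transaction tx ("the DB at time X")."""
        value = None
        for t, v in self.history[(entity, attr)]:
            if t > tx:
                break
            value = v
        return value

db = FactStore()
t1 = db.assert_fact("user:1", "email", "old@example.com")
t2 = db.assert_fact("user:1", "email", "new@example.com")
db.value_at("user:1", "email", t1)  # -> "old@example.com"
db.value_at("user:1", "email", t2)  # -> "new@example.com"
```

A CRUD store would have overwritten the old email; here the "update" is just a newer fact, which is exactly what answers the "prod DB at time X" question from the example above.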


I've meant to compile this list, so you just inspired me. I wish the Durants had had the stamina for one more volume. I don't know much about the European revolutions of 1848 and would have liked to read about that time from them. I had already read a lot of ancient Greek literature in translation, as well as the Pentateuch, all the historical books of the Old Testament, and the Gospels. Western Civilization springs from intellectual roots in Athens and Jerusalem, so any survey has to include that heritage. My degree is in Comparative Literature, so I put a lot of stock in original source material. The only languages I read are English and German, so everything is in English or translation except Faust, which IMO would be a waste in translation. The books I read for my survey (this was my survey to fill in my personal Bildungslücke) are not all classics, and not even all to be recommended. It may appear surprisingly heavy on the 20th century, but a lot of intellectual threads I thought I should understand better got kicked off in that century. In more or less chronological order by authorship or content (and I've probably forgotten a few):

Euclid The Elements

Arrian The Campaigns of Alexander

Garmonsway (trans.) The Anglo-Saxon Chronicle

Komroff (ed.) The Travels of Marco Polo

Haydn The Counter-Renaissance

Braudel The Mediterranean and the Mediterranean World in the Age of Philip II, Vol I&II

Pascal The Thoughts

Spinoza The Ethics

Christianson In the Presence of the Creator: Isaac Newton & his Times

Newton Opticks

Newton Principia (get the modern English translation by U.C. Press)

Hampson The Enlightenment

Voltaire Candide

Rousseau The Social Contract

Boswell Life of Johnson

Goethe Faust

Phillips The Cousins' Wars

Schom Napoleon Bonaparte

Heidler & Heidler Old Hickory's War

Babbage On the Economy of Machinery and Manufactures

de Tocqueville Democracy in America

Darwin On the Origin of Species

Foote The Civil War: A Narrative

Maurois Disraeli

Twain Life on the Mississippi

Spector Admiral of the New Empire, The Life and Career of George Dewey

Cardwell The Norton History of Technology

Abbott Flatland, A Romance of Many Dimensions

Meyer & Brysac Tournament of Shadows, The Great Game and the Race for Empire in Central Asia

Doughty Travels in Arabia Deserta

Lefevre The Golden Flood

Massie Dreadnought, Britain, Germany, and the Coming of the Great War

Lawrence Seven Pillars of Wisdom

Durant The Story of Philosophy

Cardozo The Nature of the Judicial Process

Schapiro The Communist Party of the Soviet Union

Popper The Logic of Scientific Discovery

Kershaw Hitler: 1889-1936 Hubris

Blumenson (ed.) The Patton Papers (1940-1945)

Hayek The Road to Serfdom

von Mises Human Action

Skinner Walden Two

Chambers Witness

Kuhn The Structure of Scientific Revolutions

Guevara Guerrilla Warfare

Cleaver Soul on Ice

Lacey The Kingdom, Arabia & the House of Saud

Durant The Lessons of History

Hackworth Lessons Learned, Vietnam Primer

Quigley Tragedy & Hope: A History of the World in Our Time

EDIT: Just remembered two more

Hobbes Leviathan

Locke Two Treatises of Government


The most useful pattern I know of for offline web apps is the command queue.

Basically, the rendered state of the client is the acknowledged server state plus the client-side command queue.

User actions don't make a server request and then update the UI. Instead they directly append to the local command queue, which updates the UI state, and right away the client begins communicating with the server to make the local change real.

While the client's command queue is nonempty, the UI shows a spinner or equivalent. If commands cannot be realized because of network failure, the UI remains functional but with a clear warning that changes are waiting to be synchronized.

(The APIs for connectivity status are useful for making sure that the command queue resumes syncing when the user's internet comes back.)
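The pattern above can be sketched in a few lines of Python (names hypothetical; a real client would persist the queue and actually talk to a server, and commands here are simplified to (key, value) pairs):

```python
class OfflineClient:
    """Offline-first client: the rendered state is the last
    acknowledged server state plus the pending local command queue."""

    def __init__(self, server_state):
        self.acked = dict(server_state)  # last state the server confirmed
        self.queue = []                  # commands not yet acknowledged

    def apply(self, command):
        # A user action appends locally first; the UI re-renders at
        # once, and syncing to the server starts in the background.
        self.queue.append(command)

    def rendered_state(self):
        state = dict(self.acked)
        for key, value in self.queue:  # replay pending commands on top
            state[key] = value
        return state

    def syncing(self):
        # Nonempty queue -> show a spinner / "changes pending" warning.
        return bool(self.queue)

    def on_server_ack(self, command):
        # The server confirmed the command: fold it into the base state.
        self.queue.remove(command)
        key, value = command
        self.acked[key] = value
```

Network failure simply leaves commands in `queue`: the UI keeps rendering the optimistic state while `syncing()` stays true.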


Sure: use Fernet.

https://github.com/fernet/spec/blob/master/Spec.md

It's an informal standard, like Noise, or WireGuard, or Curve25519, or NaCl. It's also so simple that JWT nerds will likely believe it's missing something. It is: the JWT/JOSE vulnerabilities.

It used to be that we got things working and then standardized them. Now we build cryptosystems de novo in standards committees and spend the next 10 years writing papers about the resulting flaws. OK, it didn't use to be that way, and we've always been writing papers about flaws in crypto standards. I don't know what to say about this, except "stop, somehow".


I started meditating in 2007, and it has completely changed my life. The benefits I've noticed are too numerous to list, but for example:

- My ability to sustain attention and keep a lot of details in mind (say, for writing code) has improved significantly. Before I started my practice, my ability to code was declining with age.

- I'm much less reactive than I used to be. I have more ability to choose what I say and how I act. Of course, I still can and do make unskillful choices.

- Rather than expend energy on judgement, on right vs. wrong, I now focus on skillful vs unskillful. Meditation does change your view of the world. You realize you can't control your mind. So it's not, "I think, therefore I am." It's, "I think, therefore there are thoughts." A healthy relationship to your thoughts removes many obstacles.

The thing about meditation is that you have to actually do it. No amount of knowledge of the various techniques can substitute for actual practice. For this reason, as michael_dorfman says, you really do need a teacher. Don't just read the menu, eat the food.

Buddhist Geeks[1] is a great podcast. It has interviews with various Buddhist and non-dual teachers.

In Berkeley, there's an excellent pay-what-you-want 6 week beginner class offered by James Baraz[2]. The first session is presented without any Buddhist stuff. The remaining sessions have Buddhist content, to elucidate the foundations of practice. Plenty of people who are not interested in Buddhism take the class. Many of James' talks are on DharmaSeed[3], another great source of Buddhist audio.

[1] http://www.buddhistgeeks.com/ [2] http://www.insightberkeley.org/calendar.html [3] http://dharmaseed.org/teacher/86/


When I was learning to play the violin, my teacher had a very strict method for going through the repertoire. You had your working piece, which was supposed to be hard - it was stretching the abilities of what you could do as a violinist. You had your polishing piece, which was your previous working piece where you were finessing all the fine points of technique and musicality. You had review, which was all the other pieces you'd learned so far. And you had your preps, which were little passages (4 measures or so) from your working piece that were so hard that you played them over and over again, way slowed down, until you got them right and could speed them up and incorporate them into your working piece.

I've tried to do something similar with my work so far - a working codebase that I'm just learning, a polishing codebase that I basically know my way around in, and various tweaks and bugfixes that I have to do for previous projects. It's a bit harder in the corporate world though - while Google engineers are given a lot of freedom to pick their projects, they're still subject to the needs of the business, and sometimes a project will come up that's such a great opportunity for professional visibility that I'd want to take it even if it involves working hard on something I already know well and dropping the polishing of stuff I just learned.


In grad school I managed to take advantage of the pacing effect in an educational setting. I was teaching linear algebra. What I did was make the homework incremental - 1/3 of homework on today's material, 1/3 on the previous week, and 1/3 anything in the course. Those thirds were in increasing order of difficulty.

I also started every class with a question/answer period. The rules were simple, the questioning will last at least 10 minutes, and you don't want me to ask the questions. :-) Anything that had come up in the questions that seemed to be a point of confusion was sure to be added to the next homework set.

I won't go into what else I did with that class, but the end result is worth thinking about. First note that I gave a ridiculously hard final. Other grad students who saw it thought that the class would bomb. Secondly they aced the test. What do I mean by aced? Well I had a bonus question which fellow grad students thought nobody would get. 70% of the class got that question, and a good fraction were over 100% on the test. So they must have studied hard, right? Nope. I ran into some students several months later. They told me that they tried to study for the final and stopped after a few minutes because it was useless, they knew everything. And several months later they still knew much of the material cold!

The thing is that none of what I did was very radical. The principles have been known for a century. Psychologists have been trying to get people to listen for that long. I learned about it in the 80s from a university course I watched on TV. (British Columbia had a TV channel devoted to lectures for correspondence courses.)

Yet, despite how dramatic the effects are, nobody listens and nobody takes advantage of it.


Couldn't agree with this article more.

I built the biggest social network to come out of India from 2006-2009. It was like Twitter, but over text messaging. At its peak it had 50M+ users and sent 1B+ text messages in a day.

When I started, the app was on a single machine. I didn't know a lot about databases and scaling. I didn't even know what database indexes were or what benefits they bring.

I just built the basic product over a weekend and launched. Here's the timeline of what happened each time the web server exhausted all its JVM threads trying to serve requests:

1. 1 month - 20k users - learnt about indexes and created indexes.

2. 3 months - 500k users - Realized MyISAM is a bad fit for mutable tables. Converted the tables to InnoDB. Increased the number of JVM threads for Tomcat.

3. 9 months - 5M users - Realized that the default MySQL config is for a desktop and allocates just 64MB RAM to the database. Set up the MySQL configs properly. 2 application servers now.

4. 18 months - 15M users - Tuned MySQL even more. Optimized JDBC connector to cache MySQL prepared statements.

5. 36 months - 45M users - Split database by having different tables on different machines.

I had no idea or previous experience about any of these issues. However I always had enough notice to fix issues. Worked really hard, learnt along the way and was always able to find a way to scale the service.

I know of absolutely no service which failed because it couldn't scale. First focus on building what people love. If people love your product, they will put up with the growing pains (e.g. Twitter used to be down a lot!).

Because of my previous experience, I can now build and launch a highly scalable service at launch. However the reason I do this is that it is faster for me to do it - not because I am building it for scale.

Launch as soon as you can. Iterate as fast as you can. Time is the only currency you have that can't be earned, only spent. Spend it wisely.

Edited: formatting


I've been doing the following for several years, and I think it's a good low commitment way of learning, i.e. there's a lot of bang for the buck.

1. Make a ~/git/scratch repository.

2. Whenever you see a code snippet in an interesting blog post, don't just read it. Copy it into a subdir of ~/git/scratch and run it. Write shell scripts to automate the process of running it. Prove to yourself on your computer that it works.

The first few times, it may be a little onerous. But eventually you will fall into a groove and it will take 60 seconds or less each time.

I don't promise to understand it on the first pass. Just the act of downloading it and running it gets it into your brain. Half the time you end up hacking on it anyway, and other times, you don't understand it, but when you see something related later, the light bulb will go off in your head -- "that's similar to something in my ~/git/scratch repo". And then you can go from there.

I don't know why but having a running code snippet really makes it feel "ready at hand" and you will learn faster. It somehow primes your subconscious. I feel like there are a lot of people who read Hacker News a bit passively, without retention.

Here's a good blog post along those lines, with code: http://journal.stuffwithstuff.com/2013/12/08/babys-first-gar...

-----

A much bigger commitment, but with correspondingly bigger benefits: what I found helpful was to reimplement like 10 different things I use, in maybe 500-1000 lines of Python each. I like the new "500 Lines or Less" book [1] -- I was doing this 10 years ago!

Once you implement some class of program, you have a very good idea of how the "real" libraries you use are implemented. That helps you build better systems and write better code.

Examples: A template language, a pattern matching language, test framework, protobuf serialization, a PEG parsing language, Unix tools like grep/sed/xargs, an event loop library based on Tornado, a package manager, static website generator and related tools, a web server, a web proxy, web framework, etc.
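As a taste of how small such a reimplementation can start, here's a hypothetical first cut at the template-language example, in a dozen lines of Python (the real thing would grow loops, conditionals, and escaping from here):

```python
import re

def render(template, context):
    """Tiny template language: replaces {{ name }} with values
    looked up in the context dict."""
    def substitute(match):
        key = match.group(1).strip()
        if key not in context:
            raise KeyError(f"undefined template variable: {key}")
        return str(context[key])
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

render("Hello {{ name }}, you have {{ n }} messages.",
       {"name": "Ada", "n": 3})
# -> "Hello Ada, you have 3 messages."
```

Writing even this much forces you to confront the design questions (lookup failure, escaping, nesting) that the "real" libraries have answered, which is exactly the point of the exercise.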

In addition to writing something from scratch, I also find tiny code on the Internet and play around with it, like tinypy, OCamlLisp, femtolisp, xv6, etc.

[1] http://aosabook.org/en/index.html


Plug for Mathematica, in which, once it's installed, you can do deep learning in one or two lines, with GPU support on all three platforms and no setup. Very concise. It's getting fairly competitive in features with other high-level declarative frameworks as of 11.1 (which was just released today). Very nice visualizations, thanks to being in Mathematica. The language is of course closed-source, paid software. Many universities have site licenses, so there is a large built-in audience who can use it in courses etc. 'for free'; home licenses are comparable to Photoshop or whatever.

See http://reference.wolfram.com/language/guide/NeuralNetworks.h..., also look at Examples > Applications under http://reference.wolfram.com/language/ref/NetTrain.html for some worked examples. Fun example of live visualization during training (very easy to do, will get even easier in future versions): https://twitter.com/taliesinb/status/839013689613254656


As someone who has crafted thousands of complex regular expression rules for data capture, here is my take:

1. This is a fine idea to aid regex newbies in crafting their expressions. I see this as a gateway instead of a longterm tool. The expressions won't be optimal (by no fault of the tool), nor will they likely be complete, but that's not the point. If it helps reduce the barrier(s) to adoption of regular expressions, then I can heartily support it.

2. To the people who say they use regular expressions only a handful of times a year, and thus it's not worthwhile to invest time in learning the syntax, I offer this: once you know it, you will use it far more often than you ever expected. Find & replace in text, piping output, nginx.conf editing, or even the REGEXP() function in MySQL. It's a valuable skill in so many environments that I expect you will use it weekly, if not daily.

3. Ultimately regular expressions, like everything, are extra difficult until you know all of the available tools in the toolbox. At that point, you may realize you wrote an unnecessarily complex expression simply because you didn't know better.


"I've never understood why jails didn't take off. I guess maybe since linux took off and the bsds didn't, but they're just nice and elegant."

The very first VPS[1] provider, JohnCompanies, was built entirely on jail (and FreeBSD 4.x).

At the peak we had over a thousand FreeBSD jails running for customers all over the world.

In the end, fancy provisioning and fine-grained resource tuning (with products like Virtuozzo) won out. Although JC is still operating and still provides jail-based VPS.

The offsite backup infrastructure that was built for JC customers became a standalone company in 2006 and was named "rsync.net".

[1] The term "VPS" had not been coined in mid-2001 so I made up the term "server instance" which didn't stick.


This is directly analogous to memory allocation[1].

Your day is the heap of all available memory.

"Manager time" is a small allocation. Since the manager's heap only uses blocks this size, she can do this all day without any problems or fragmentation.

"Maker time" is a large contiguous block. If the heap is empty, you can allocate them without any problem. But stick one manager-sized block in the middle and now you've split your heap such that the total time available is large enough, but it's not contiguous. Classic heap fragmentation[2].

pg's solution is the classic one in memory management: memory pools[3]. Partition your heap into regions for different-sized allocations. Small allocations always go in one region, large ones in another. "Office hours" is your small block allocator.
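To make the fragmentation point concrete, here's a tiny sketch (a hypothetical day of 8 one-hour slots in Python): the same single meeting costs very different amounts of contiguous maker time depending on where it lands.

```python
def longest_free_block(day):
    """Length of the longest contiguous run of free (None) slots --
    the biggest 'maker block' the day can still allocate."""
    best = run = 0
    for slot in day:
        run = run + 1 if slot is None else 0
        best = max(best, run)
    return best

# One manager-sized block dropped mid-day fragments the heap:
fragmented = [None, None, None, "mtg", None, None, None, None]

# Pooled ("office hours"): small allocations confined to one region.
pooled = [None, None, None, None, None, None, None, "mtg"]

longest_free_block(fragmented)  # 4 contiguous slots at best
longest_free_block(pooled)      # 7 contiguous slots of maker time
```

Both days have 7 free hours in total; only the pooled layout keeps them allocatable as one large block, which is the whole argument for office hours.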

I often hear people say, "Look, I program in <high level language> with <giant standard library> all day. Why should I care about this old low level CS stuff? It's already been done for me!"

This is a great example to me of how those old concepts are still useful once you realize they are only a single analogy away from being about your new high level problem.

[1]: https://en.wikipedia.org/wiki/Memory_management [2]: https://en.wikipedia.org/wiki/Fragmentation_(computing)#Exte... [3]: http://gameprogrammingpatterns.com/object-pool.html


They're likely using one of the world's oldest business models: "Buy things for $1, sell them for $2."
