These projects are a source of fascination to me, but a persistent question remains: how do these more symbolic approaches to cognitive modelling figure in today's world of ML and data-driven AI? I'm very curious to know where and to what extent the symbolic approaches of the past (and present) meet with ML?
I ask because you clearly have some exposure to these sorts of projects - any sources you can provide would be appreciated.
> I'm very curious to know where and to what extent the symbolic approaches of the past (and present) meet with ML?
If you had a good answer to that, you'd probably be well on your way to a Ph.D., if not a Turing Award. The question of symbolic/sub-symbolic integration has been a big outstanding question in the AI world for a very long time now. I don't think many people were actively working on it for quite a while, but it seems like there has been at least a small uptick in interest in that idea recently. My personal belief is that this kind of integration will be essential, at least in the short term, to achieving something like what we might actually call AGI. And while I'm hardly alone in thinking this, this position is by no means universally held. There are people (Geoff Hinton among others, if memory serves correctly) who believe that "neural nets are completely sufficient".
And frankly, in the long (enough) term that might be right. Build ANNs that are sufficiently deep, sufficiently wide, and with just the right initial architecture, and maybe you get something that develops "the master algorithm" and figures it all out on its own. I think that's probably possible in principle; but my doubt about all of that is more about how realistic it is, especially over shorter time scales.
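To make the integration idea concrete, here is a toy sketch of one common pattern: a sub-symbolic ("neural") layer produces soft, graded judgments, and a symbolic rule layer reasons crisply over them. Everything here (function names, predicates, rules) is my own illustration, not the API of OpenCog or any real system:

```python
# Toy neuro-symbolic sketch: a "neural" layer emits soft predicates,
# and a symbolic rule layer applies crisp logic over those confidences.
# All names are illustrative assumptions, not any particular system's API.

def neural_scores(pixels):
    """Stand-in for a trained network: maps raw input to soft predicates."""
    brightness = sum(pixels) / len(pixels)
    return {
        "is_bright": brightness,       # confidence in [0, 1]
        "is_dark": 1.0 - brightness,
    }

RULES = [
    # (conclusion, premises) -- fires when every premise is confident enough
    ("daytime", ["is_bright"]),
    ("nighttime", ["is_dark"]),
]

def symbolic_infer(scores, threshold=0.6):
    """Apply crisp rules over the sub-symbolic confidences."""
    return [head for head, body in RULES
            if all(scores[p] >= threshold for p in body)]

print(symbolic_infer(neural_scores([0.9, 0.8, 0.95])))  # → ['daytime']
```

The hard research problem, of course, is everything this sketch glosses over: where the rules come from, how gradients flow back through the symbolic layer, and how the two sides share a representation.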
Anyway, if you're really interested in the topic, Ben Goertzel's OpenCog system includes a strong focus on symbolic/sub-symbolic integration, and borrows a lot of ideas from some well-known cognitive architecture work (LIDA, in particular).
Also, googling "symbolic / sub-symbolic integration" will turn up a ton of sites / papers / books / etc. that go into far more detail.
I went very deep into OpenCog and finally had to concede that there just wasn't enough rigor and coordination between the components. Goertzel seems easily distracted by various other subjects. I realize that he has to figure out ways to fund his work, so I am not being judgemental.
In addition to symbolic and deep learning, future AI systems will most likely have a causal learning component. Judea Pearl has been working on this subject for years. http://bayes.cs.ucla.edu/jp_home.html
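As a minimal illustration of the kind of thing Pearl's framework addresses (the data and names below are entirely synthetic, my own sketch rather than Pearl's code): when a confounder Z is observed, the backdoor adjustment formula estimates the interventional quantity P(Y=1 | do(X=1)) as a Z-weighted average, which can differ sharply from the naive conditional P(Y=1 | X=1):

```python
# Backdoor adjustment on synthetic confounded data (illustrative sketch).
# Z confounds both treatment X and outcome Y, so conditioning alone misleads.

# Synthetic records: (z, x, y).
data = (
    [(1, 1, 1)] * 40 + [(1, 1, 0)] * 10 +  # Z=1 mostly takes treatment
    [(1, 0, 1)] * 8  + [(1, 0, 0)] * 2 +
    [(0, 1, 1)] * 2  + [(0, 1, 0)] * 8 +   # Z=0 mostly doesn't
    [(0, 0, 1)] * 8  + [(0, 0, 0)] * 32
)

def p_y_given(x, z=None):
    """Empirical P(Y=1 | X=x) or P(Y=1 | X=x, Z=z)."""
    rows = [r for r in data if r[1] == x and (z is None or r[0] == z)]
    return sum(r[2] for r in rows) / len(rows)

def p_z(z):
    """Empirical marginal P(Z=z)."""
    return sum(1 for r in data if r[0] == z) / len(data)

naive = p_y_given(1)                                      # P(Y=1 | X=1)
adjusted = sum(p_y_given(1, z) * p_z(z) for z in (0, 1))  # P(Y=1 | do(X=1))
print(round(naive, 2), round(adjusted, 2))  # → 0.7 0.53
```

Here the naive conditional (0.7) overstates the causal effect because Z drives both X and Y; adjustment (≈0.53) corrects for that. Discovering *which* variables to adjust for is where the real machinery (causal graphs, do-calculus) comes in.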
Good points all around. I think OpenCog has a lot of good ideas, but I won't claim that it's the "be all, end all", as of today. That said, I think to some extent the statement "there just wasn't enough rigor and coordination between the components" may be true exactly because that is the central challenge that still remains to be solved.
At the very least, I think reading Goertzel's books[1] and looking at OpenCog is a good introduction to the issues at hand in a general sense.
Totally agree on the causal learning thing. And that's an area that also seems to have had a resurgence of interest and activity lately.
[1]: Here I specifically mean Engineering General Intelligence, Volumes 1 & 2
This person gets it. Which is rare - do you want a job? :)
But seriously, you hit upon something which is important, but very hard to communicate/admit - a big part of making the "DevOps Transformation" happen - which is a cheesy term but it conveys something better than just DevOps - is saying _no_ to devs and traditional SysAdmins and giving them a better alternative. Unfortunately this means that SysAdmin skillsets need to be supplemented or even supplanted by SWE skillsets. But the TL;DR is yes, doing DevOps often means putting the brakes on fun.
This is why I like the 'platform engineering' meme - much better way to frame the value proposition. See: Thoughtworks' blogs and podcasts on platform-engineering-centric topics.
This is absolutely wrong - DevOps is not a job title. At least it shouldn't be, as anyone in the sector will tell you. The conflation of DevOps responsibilities and specializations with SysAdmin work is the reason why the vast majority of "DevOps" teams are...vastly unqualified.
GitOps is a deployment methodology. Sure, it's a buzzword, but product devs have plenty of those as well.
I _do_ agree that DevSecOps is a pointless concept - as is FinOps and other xOps crap - it's become a way to throw work over the wall, which is what DevOps was trying to fix.
This endless taxonomy of terms for the people who do shit that we don't want to do is a definite drag on _actual_ acceleration of SDLC in the cloud.
You hit on the most important thing about DevOps: DevOps is not a job title - as any DevOps person will tell you. Obviously it _is_ a job title, but what they mean is that it isn't supposed to be one. It's supposed to be a way of looking at the SDLC to reduce friction, decrease siloing, and approach shipping and building software into production with the same SWE discipline that feature work is given.
The failure of the industry to actually grok this has resulted in the re-framing of DevOps as SRE - which is also misunderstood by 99.9% of engineers and managers - and more helpfully, in the rise of the Platform Engineering meme.
In my time as a platform engineer - or what devs call a 'DevOps Engineer' - which has never been my job title thankfully - I've found that people either think "DevOps" are wizards or drooling idiots. At the same time, however, many devs are unwilling or unable to consider the complexities of productionizing an application before those complexities cause delays. The point of DevOps was to catalyse cultural change, not to build a new silo for people to misunderstand and ignore.
This article touches on a couple of the problems with SRE-ism as its currently understood/practiced at a lot of organizations:
> SREs aren’t necessarily organized as part of development or IT ops teams.
SRE teams should be composed of software engineers who do software engineering - specifically to tackle the thus-far intractable problem of managing operational complexity across the SDLC. It says as much in the first chapters of the SRE book. The idea that SREs are not engineers, or developers, is nuts to me. Of course, most SREs _aren't_ developers; they haven't ever been in that role. That's fine, but organizations need to understand that transposing SysAdmin into SRE doesn't magically change anything.
> No role in CI/CD for reliability engineering
Phew boy - this really underscores the weakness of the SRE meme, in my opinion; and highlights the usefulness of the Platform Engineering meme. CI/CD is the fulcrum where effective SREs/Platform Engineers can insert a _ton_ of value into the business, and if they're not allowed to have a leading role in its care and feeding then they are basically being kept from doing their jobs. I've seen this many times over. Another pathology is an organization where CI/CD is neglected in favor of shinier, more whiz-bang toys - this is just as unfortunate.
Just my two cents - but as I always say, everything in moderation except moderation.
Not specifically about the article, but this applies to any X where you get a conclusion like: "Everyone owns X"
For those who currently care about X but don't get enough support in their organisation, it makes perfect sense. For the rest, nothing needs to change - after all, those other guys should have it.
The article issues hypothetical directives like "Include all teams in testing" without any consideration of what kind of activity might make this feasible. All teams simply reply "no thank you", and it's back to square one.
To be clear, I am not saying these suggestions are wrong. Just naive.
Years ago I saw a talk by someone leading the "innovation department" of a large national police organization in the EU. Her main point: the quickest way to kill innovation (or reliability, or performance, etc) is to make a special team that "owns" that area. This is because it signals to the rest of the teams that it is not their problem anymore, because "after all SRE is responsible for reliability".
On the other side of the spectrum, the main problem with "Everyone owns X" type initiatives is that often nobody gets measured on X but they do get measured on their other responsibilities. The predictable result is that all those smart and driven people you hired will realize that spending time on X will not get them promoted but neglecting it does not cost anything.
Personally, I would recommend listening to Thoughtworks' Technology Podcast. Their approach in general to running in the cloud is a pretty good one, which I'd characterize as platform-engineering-centric. Anything Will Larson writes is bound to be worth looking at and thinking about in the context of scaling cloud operations and operational complexity. His concept of the 'pierceable abstraction' is _absolutely key_ in my opinion.
Additionally, the following blog posts are fairly good introductions:
I'd also like to hear about a reading list the commenter suggests for erroneously-named-but-that's-our-job-title DevOps Engineers who really want to spearhead SRE changes at their workplace.
This is really exciting - congratulations to the team and all the contributors.
For those who already know "what smalltalk is all about" but are looking to get into Pharo and get something done, I recommend Pharo by Example and Deep into Pharo - between those two books, any developer should be able to figure out the workflows Pharo expects you to employ to be productive.
My only advice would be to go in without preconceived notions based on your years of using other languages. And the community is super helpful.
It's a tall order. My observation of people with mastery who switch to a new tool is that usually either (a) something external has forced the change, in which case it's hard for them to be patient/charitable, or (b) they like their current tool but have some ultra-specific complaints that they hope the new tool will address whilst leaving the rest intact.
Basically, the common circumstances around these things lead to some biases.
I think one way to address this is simply to be aware of ourselves. Another would be to ensure people use a large variety of tools early on in their education/career. The meta-skill of navigating to a new tool is something we get better at with practice.