It was very easy to find wills for my deceased relatives. However, when trying to set up an account to view my national insurance record, it asks for a UK postcode, which I don't have since I've lived abroad for many years.
There are several third-world nationalities for whom this is a godsend (not this one in particular, but others in the region). As an Indian national, I have to undergo a painful process to procure a tourist visa, which is often valid for only a few days. Something like this, which lets me work remotely for clients based in Europe and North America and lets me travel through Schengen without any stress, is a real game changer.
Correct. It is very explicitly targeting tourists who want to stay for more than 90 days, up to 180 days. Basically this is a convoluted method of extending tourism for rich tourists over a longer (specifically, double) period so they pump more of their money into the country. That's all.
This is probably going to be the most ridiculous suggestion on this thread, but since you mentioned Latin American and African cities:
Have you thought of India? The major South Indian cities (Bangalore, Chennai, Hyderabad) have large tech scenes, with Bangalore's probably being one of the largest in the world. Many major US and EU employers have remote workers and/or offices here. The weather is certainly warm, perhaps even too warm :P
I'm from one of these cities and spent several years living in Europe. It wasn't until I moved from continental Europe to the UK that I found a comparable restaurant scene to my home city. Beaches, deserts, forests, and mountains are all just a short flight away. Not sure what diversity means to you though, so I might get this one wrong.
Uber, Ola, and other alternatives to owning a car are preferred by many; failing that, you could always get a car with a driver, which is quite affordable on a developer's wage. All three cities are building out their metro systems as well.
Of course, life in India doesn't suit everyone. There are several negatives, which I guess everyone knows. As I said, it's probably the most ridiculous suggestion, but here it is.
A big problem with this kind of massively multilingual machine learning research is that the researchers in question know almost nothing about most of the languages they're dealing with. They also grouped Malayalam with Malay. (Though they also say that they focused on languages that get the most translation requests, so maybe this is down to users getting confused about which language they want.)
Their parallel sentence mining project LASER also has problems that are obvious when you know the languages involved. Some time ago I looked at their most confident matches for English-Chinese and briefly thought I was looking at the least confident ones, because Bible quotes were paired with random snippets in Classical Chinese. I think their embedding model was confused by the archaic language.
So I'm glad they also used human evaluators and not just BLEU scores, but I'd've really liked to see a human evaluation of their training data. I think it's possible that the model can average out noise to produce better garbage when you put garbage in, but it might also get completely confused and produce worse garbage. With their testing setup, it's impossible to tell whether more data or better data is needed to improve the performance of this model.
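For context on how that kind of mining pairs sentences in the first place, here is a minimal sketch of margin-based parallel sentence mining over multilingual sentence embeddings. This is not LASER's actual code; the function name, toy data, and `k` value are invented for illustration. The idea: score each candidate pair by its cosine similarity divided by the average similarity to its nearest neighbours, which is meant to penalize sentences that look similar to everything (formulaic text like Bible quotes being a classic failure case).

```python
import numpy as np

def margin_scores(src, tgt, k=2):
    """src: (m, d), tgt: (n, d) row-normalized sentence embeddings.
    Returns an (m, n) matrix of margin scores."""
    sim = src @ tgt.T                                    # cosine similarities
    # average similarity of each sentence to its k nearest neighbours
    # on the other side
    src_avg = np.sort(sim, axis=1)[:, -k:].mean(axis=1)  # (m,)
    tgt_avg = np.sort(sim, axis=0)[-k:, :].mean(axis=0)  # (n,)
    # margin criterion: raw similarity relative to neighbourhood average
    return sim / ((src_avg[:, None] + tgt_avg[None, :]) / 2)

# toy example: 3 "source" embeddings, and 3 "target" embeddings that are
# the same vectors plus a little noise, so pair i<->i should win
rng = np.random.default_rng(0)
src = rng.normal(size=(3, 8))
src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt = src + 0.05 * rng.normal(size=(3, 8))
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)

m = margin_scores(src, tgt)
best = m.argmax(axis=1)  # best-scoring target for each source sentence
```

The complaint above is precisely that a high margin score still doesn't guarantee the pair is a translation; it only means the pair stands out from its neighbourhood in embedding space.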
Some of the assumptions about language in this paper are just total junk lol... this one is particularly good: "...and for the rest, overlapping vocabulary is a good proxy for similar languages" - this is so wrong I don't even know where to start. The grouping of languages by family is also bizarre; the genetic groupings they give for each language are at all sorts of different levels. They say that cultural and geographic proximity was also a factor in grouping, but e.g. the Mongolic and Kra-Dai families have essentially nothing in common apart from the fact that the people who speak them look sort of similar to a European. Grouping the Afroasiatic languages Somali and Amharic with the Niger-Congo set also makes it seem like the only criterion was the physical appearance of the speakers...
There is also no way for a reader of the paper to judge the effectiveness of the algorithm. They cite an evaluation of "semantic accuracy", but say nothing about the design of the task, participant selection, or example data.
This paper is pretty much junk science. Even the reference section is amateurishly formatted.
I don't know anything about the situation there, but it might still make sense to group it if it's in the same "linguistic area" (see https://en.wikipedia.org/wiki/Sprachbund ). E.g. the Apertium translator from Northern Saami to Norwegian is very useful since both languages – though from very different families – are spoken in the same country and speakers have had millennia of contact, so there's more translated text available than you'd otherwise expect from such different languages and there's need for more translations.
Location: Munich
Remote: Yes
Willing to relocate: Yes (DE/CH/UK)
Technologies: C++, Java, Python, Android, PyTorch, Tensorflow, CUDA, Git, OpenCV, GCP, hobby projects with Golang
Résumé/CV: https://mayankpatwari.de/
Email: mpatwari94 + work [at] gmail [dot] com
I'm looking for positions in Health AI, specifically machine learning for medical imaging. I've spent several years in R&D at a large CT manufacturer. I originally studied Biomedical Engineering, so I'm interested in the MedTech space, but open to other opportunities. I've also worked on development of Android health CV apps.