Natural language processing and understanding (NLP/U) technologies are used across countless industries to help understand text-based documents and extract key information from them. In this live stream, José Manuel Gómez Pérez focuses on three domains in which NLP/U technologies are proving to be both novel and impactful: earth sciences, misinformation detection, and science and engineering in space.
Tune in to learn:
- How language technologies can be used to automatically identify misinformation and help fact-checkers manage it more effectively
- How to extract relevant information from scientific bibliographies to assist scientists in their daily research work
- How to help space agencies like ESA enforce quality protocols and design the interplanetary discovery missions of the future.
Transcript:
Brian Munz:
Hi, everyone. Welcome to another NLP stream. And we’re happy to have you again. This is a weekly series, as you know, dedicated to the latest and greatest in NLP, aka natural language processing. I’m your host, Brian Munz. I’m the product manager and developer advocate here at Expert.
Brian Munz:
And today, we are going to be talking with Jose Manuel Gomez-Perez, who also works at Expert as the director of language technology research. Without further ado, I’m going to kick it over to him to jump right into it, and he’s going to show us some really cool applications.
Jose Manuel Gomez-Perez:
Okay. Thank you, Brian. Let me share my screen. Here we go. You should be seeing it now. Yeah. Okay. Thank you, everybody, for being here today. Today, we’re going to talk about NLP in action, focusing on applications in different domains.
Jose Manuel Gomez-Perez:
The scope of the webinar may be a little bit different from what you may be used to. This is not about the theory of data science, artificial intelligence, or NLP, although we can organize another one to talk about those kinds of things at some point. Today, we’re going to focus on something we think is particularly important: how research work can turn into practical innovations that eventually go into the market and make an impact there.
Jose Manuel Gomez-Perez:
The idea is to illustrate this through specific developments in some particularly cool domains, or at least domains we find very cool. These are the domains: hot topics like misinformation detection, the scientific domain, particularly earth science, and space engineering, working with some of the international space agencies you can think of.
Jose Manuel Gomez-Perez:
Let’s start with misinformation. Just a quick definition of what misinformation is. The way we see it, this is a cognitive problem. Any of us can only see part of the world. Typically, you need to rely on others, like friends, media, your neighbors, to inform you. The problem with the scale of the web is that this is tremendously amplified. There’s a lot of content, and it becomes harder to curate. In the end, you end up with gaslighting at scale.
Jose Manuel Gomez-Perez:
Another characteristic of misinformation is that it’s asymmetric: it’s very easy to produce, but it’s very hard to debunk. And misinformation can have a very big impact on society, on the economy, and on democracy itself. Being misinformed can lead to not taking any action, or to taking the wrong action. It matters in either direction.
Jose Manuel Gomez-Perez:
Detecting misinformation automatically is an extremely hard problem, with infinite corner cases. And one may argue that you really need artificial intelligence to deal with it. You have disciplines like information retrieval, large-scale data processing, deep natural language understanding, and multimodality, and a lot of them need to be combined in order to deal with misinformation detection from a computer science point of view.
Jose Manuel Gomez-Perez:
Something that we need to consider when we are talking about misinformation is that we’re not just interested in accuracy, in identifying whether something is true or false. We are also interested in the evidence behind the reasoning. That’s why we are a little bit skeptical about these end-to-end models, which can be very powerful in making these kinds of predictions but at the same time are opaque.
Jose Manuel Gomez-Perez:
A lot of the evidence that I’m talking about comes from fact-checkers, who are the ones verifying all these claims. And just to say, fact checking needs to be multilateral, interoperable, and explainable. Explainability is a very important word here. I’m going to talk about a contribution that we made in this domain, called Linked Credibility Reviews, which received an award in the past and which we are now taking to market.
Jose Manuel Gomez-Perez:
The basic idea here is that we have a pipeline. The first thing we do when we receive some text that is possibly misinforming is decompose it into individual sentences, which can be claims, along with the URLs contained in that text. Then we link each of these new claims that arrives to us with previously verified claims, or at least we try to. We have a ground truth of tens of thousands of fact-checked articles.
Jose Manuel Gomez-Perez:
We try to semantically find similar claims in the ground truth, which serve as evidence. We also try to keep track of the stance of these claims with respect to the claim we are trying to verify. By stance, I mean whether the ground-truth claims support or refute the new claims arriving in our pipeline. Then we look up credibility signals, which can be, for example, human ratings from fact-checkers, or can be related to website reputation. And finally, we aggregate the results we get for each individual claim and produce a unified result.
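To make the flow just described a bit more concrete, here is a minimal sketch of how the claim-matching, stance, and aggregation steps could be wired together in Python. It assumes a sentence-transformers encoder for semantic similarity and a generic NLI cross-encoder for stance; the model names, the toy ground truth, and the aggregation rule are illustrative only, not the actual Linked Credibility Reviews implementation.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Illustrative components only; the production system uses its own models,
# thresholds, credibility signals, and aggregation logic.
encoder = SentenceTransformer("all-MiniLM-L6-v2")                  # claim similarity
stance_model = CrossEncoder("cross-encoder/nli-deberta-v3-base")   # entail/contradict
STANCE_LABELS = ["contradiction", "entailment", "neutral"]

# A tiny stand-in for the ground truth of fact-checked claims (rating in [0, 1]).
verified = [{"claim": "Seattle police begin gun confiscations.", "rating": 0.05}]
verified_embs = encoder.encode([v["claim"] for v in verified], convert_to_tensor=True)

def review_claim(sentence: str) -> dict:
    """Link a sentence to its most similar verified claim and estimate stance."""
    emb = encoder.encode(sentence, convert_to_tensor=True)
    sims = util.cos_sim(emb, verified_embs)[0]
    best = int(sims.argmax())
    evidence = verified[best]
    scores = stance_model.predict([(evidence["claim"], sentence)])
    stance = STANCE_LABELS[int(scores.argmax())]
    # Naive credibility rule: agreeing with a low-credibility claim is low credibility.
    if stance == "entailment":
        credibility = evidence["rating"]
    elif stance == "contradiction":
        credibility = 1.0 - evidence["rating"]
    else:
        credibility = 0.5  # no usable evidence found
    return {"sentence": sentence, "evidence": evidence["claim"],
            "similarity": float(sims[best]), "stance": stance,
            "credibility": credibility}

def review_text(text: str) -> dict:
    """Review each sentence and aggregate into a single, explainable result."""
    reviews = [review_claim(s.strip()) for s in text.split(".") if s.strip()]
    worst = min(reviews, key=lambda r: r["credibility"])
    return {"credibility": worst["credibility"], "least_credible": worst}
```

The sentence-level dictionaries double as a crude explanation: the aggregated verdict always points back to the least credible sentence and the evidence it was matched against.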
Jose Manuel Gomez-Perez:
Throughout this process, something that is very, very important is to have a way to represent all this information explicitly, transparently, and interoperably. We propose this model, which describes a credibility review as a combination of the data you are trying to review, the rating that your algorithm provides, the confidence with which this rating is provided, and the provenance of the evidence you are using to verify the claim.
Jose Manuel Gomez-Perez:
An important thing about this framework is that provenance is mandatory; it always has to be included. The rating and the confidence are subjective, but they are publicly and explicitly represented, so other people can verify the opinion. This model has been proposed as an extension of schema.org by the W3C. It’s open and interoperable in that sense.
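As a rough illustration of the record described above, here is a small Python data class with the four ingredients of a credibility review (item reviewed, rating, confidence, provenance). The real model is expressed as a schema.org extension; the field names here are illustrative, not the normative vocabulary.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CredibilityReview:
    """Rough approximation of a Linked Credibility Review record.

    The actual model is an extension of schema.org; these field names
    are illustrative only.
    """
    item_reviewed: str          # the claim, sentence, or URL under review
    rating: float               # credibility rating produced by the reviewer
    confidence: float           # how confident the reviewer is in that rating
    provenance: List[str] = field(default_factory=list)  # evidence used; mandatory
    reviewer: str = "unknown"   # the bot or fact-checker that produced the review

    def is_valid(self) -> bool:
        # Provenance is mandatory: a review without evidence should be rejected.
        return len(self.provenance) > 0
```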
Jose Manuel Gomez-Perez:
As a consequence of using this explicit way of representing credibility reviews, we have evidence-based and explainable predictions. For example, take this tweet here: “US representatives agreed to elicit UN gun control.” Our algorithm determines that this is a not-credible tweet because one of its sentences, the least credible sentence in the text, agrees with the claim, “Seattle police begin gun confiscations, no laws broken, no warrant, no charges,” which is not credible and was already fact-checked by PolitiFact. Since it agrees with that claim, the whole text is determined to be not credible as well.
Jose Manuel Gomez-Perez:
We have a textual explanation, and at the same time, we have an evidence graph, which is very cool because it allows you to trace back the whole way your algorithm processed this information in order to come up with this outcome. For those interested, you can go and see some of the applications we have created on top of this, like this one, where we were monitoring Twitter activity during the COVID pandemic. We accumulated about 16 million tweets and produced this dashboard, which is accessible for everybody to see.
Jose Manuel Gomez-Perez:
Okay, that was about misinformation. Let’s go to another domain, related to earth science. A couple of years ago, Yolanda Gil, who was the president of AAAI, the Association for the Advancement of Artificial Intelligence, gave her presidential address at the conference about whether or not AI will be able to write the scientific papers of the future.
Jose Manuel Gomez-Perez:
She was referring to the increasing complexity of the scientific enterprise, from the early days, when there was just single authorship, to co-authorship with large numbers of authors, and now, finally, a community as an author, where we have many different people contributing to scientific results.
Jose Manuel Gomez-Perez:
What this means is that science is becoming very complex, and society needs some kind of way to enable scientists to cope with all the information related to their research. That’s what we are working on. In the context of the RELIANCE project, which is funded by the European Union, what we are doing is developing artificial intelligence that assists scientists by enriching and understanding scientific content. And by scientific content, I mean, for example, scientific papers, technical reports, field notes, scientific diagrams, figures, tables, all that type of information.
Jose Manuel Gomez-Perez:
What we did in the context of the project was to produce an API with services for the analysis of these kinds of documents. It’s particularly interesting because you cannot just use any NLP system to do these things, because the scientific [inaudible 00:10:46], and particularly each of the disciplines you may be interested in, like in this case earth science, is very specific. It has a very specific lexicon. You need to work on systems and models that are aware of that terminology.
Jose Manuel Gomez-Perez:
And that’s what we are doing here. For example, what we offer is an API that allows you to programmatically extract, from scientific texts in earth science, the main sentences, the main lemmas, the main phrases, and the main concepts in the text. We also offer access to different entities: we are able to produce the names of the authors, the places described in the scientific publication, organizations, dates, these kinds of things.
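As a sketch of what calling such a service might look like from client code, here is a small example using a hypothetical endpoint and payload; the real API has its own URL, authentication, and response schema, so everything below is an assumption for illustration.

```python
import requests

# Hypothetical endpoint and request format, not the actual service contract.
API_URL = "https://example.org/text-analysis/analyze"

def analyze_text(text: str) -> dict:
    """Request key sentences, lemmas, phrases, concepts, and named entities."""
    response = requests.post(
        API_URL,
        json={"text": text,
              "features": ["sentences", "lemmas", "phrases", "concepts", "entities"]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Illustrative input inspired by the earth-science examples in the talk.
result = analyze_text(
    "Lava flows from the Virunga volcanoes were mapped during the recent eruptions."
)
print(result.get("concepts"), result.get("entities"))
```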
Jose Manuel Gomez-Perez:
And we are combining, in a hybrid way, different approaches to NLP. For example, we have rule-based taxonomies that are able to classify the topics that a particular piece of scientific text is about. But we are also combining this with algorithms based on deep learning, neural networks, language models, et cetera, depending on the application and on the problems we want to solve. This is also publicly available, and I encourage you to go there and give it a go.
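A toy sketch of what such a rules-plus-neural combination could look like follows. The keyword taxonomy and the zero-shot fallback model are illustrative stand-ins for the curated taxonomies and fine-tuned models actually used in the project.

```python
from transformers import pipeline

# Hand-written taxonomy rules fire first; a zero-shot neural model is the fallback.
TAXONOMY_RULES = {
    "volcanology": ["lava", "eruption", "magma", "volcano"],
    "oceanography": ["sea level", "ocean current", "salinity"],
}
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def classify_topic(text: str) -> str:
    lowered = text.lower()
    for topic, keywords in TAXONOMY_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return topic                                   # rule-based decision
    result = zero_shot(text, candidate_labels=list(TAXONOMY_RULES))
    return result["labels"][0]                             # neural fallback

print(classify_topic("Ash plumes were observed above the crater after the eruption."))
```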
Jose Manuel Gomez-Perez:
It’s actually deployed in the European Open Science Cloud, which is an effort to make all these technologies and data accessible to the scientific community. And here you have a number of resources as well that you can check. Our services are in the marketplace of the European Open Science Cloud for any scientist to use, or any computer scientist who is working in science.
Jose Manuel Gomez-Perez:
Actually, it’s already used by the science community. For example, this system, called ROHUB, invokes our APIs to analyze aggregations of scientific information, where there is typically a lot of text. If you look here where it says, “Discover metadata,” all this metadata has been produced by our services by analyzing the content of these aggregations of scientific information. In this case, it’s a report about the eruptions of the Virunga volcanoes.
Jose Manuel Gomez-Perez:
Okay, that was our second block. Now we move on to space engineering, the third application domain we are going to talk about today. This work was done in the context of a project with the European Space Agency. They were very interested in understanding the benefits and limitations of text analysis and natural language understanding in different aspects of the space system lifecycle. In particular, they were interested in mission design and quality assurance, which are key aspects of their daily work.
Jose Manuel Gomez-Perez:
Like I said in the previous block, one of the hardest challenges we have in the scientific domain is that the scientific lexicon is highly specialized and domain specific. That’s particularly true in the space science and engineering domain: very, very specialized language, very domain specific. What this means is that you can reuse existing resources, but you will need a heavy amount of customization for them to work optimally.
Jose Manuel Gomez-Perez:
Another problem that we ran into very early on in this project is that there is a lack of labeled data, which means that if you want to use machine learning models based on supervised training, this is going to be very difficult, because there is basically no data that you can use for fine-tuning the models for specific tasks like text classification, entity recognition, all these kinds of things.
Jose Manuel Gomez-Perez:
And then another challenge, or limitation, that we had to deal with here was that all the models, all the software, everything that came out of the project needed to be released as open source. What this means is that we could not use any proprietary solution in the project. We needed to resort to existing approaches that are common in the NLP and [inaudible 00:15:59] community from an open-source point of view.
Jose Manuel Gomez-Perez:
The question we asked ourselves is: how far can we go? How far can a strategy based on transfer learning take us? By that I mean reusing existing pre-trained language models and generally available datasets. And the answer is that, while we know there are still many gaps to cover, they took us reasonably far.
Jose Manuel Gomez-Perez:
To do this, we focused on two use cases: one related to ESA’s, the European Space Agency’s, Concurrent Design Facility, and another related to quality assurance, or quality management systems. What the Concurrent Design Facility does is basically produce these CDF reports describing a mission design configuration.
Jose Manuel Gomez-Perez:
For every mission that the European Space Agency sends to space, they do one of these reports, which are super long documents, very technical, very complex, where basically the feasibility of the mission is analyzed. Here you can find all types of things, like a mission description, systems and service models, embedded systems, the payload of the mission, a description of the ground segment and operations, a technical risk assessment, all kinds of stuff.
Jose Manuel Gomez-Perez:
Also, since these documents are so long, you cannot really use a typical information retrieval approach to look for information in them. We proposed an approach based on question answering, and this is the application of natural language processing and understanding that we applied in this use case. The idea was not just to retrieve the document that is relevant for a particular query or question, but also to retrieve or produce the exact answer to the question from that particular document.
Jose Manuel Gomez-Perez:
Okay. That’s one use case. The other use case we dealt with in this project was related to question generation in the context of quality management and quality assurance. For the agency, this is an area that is very, very important. Anything that goes wrong can end up losing tremendous amounts of money and eventually even lives. Here I have a picture of the explosion of the first Ariane 5 flight in 1996. You can imagine.
Jose Manuel Gomez-Perez:
They take it very seriously. And one of the things they do to make sure that everybody is aware of the quality management procedures is to produce quizzes that they use for evaluation. The quizzes are based on quality management documents, and currently they are made by hand. What they wanted to see is whether we could somehow come up with a way to automatically generate these quizzes from the quality management documents, and that’s what we set out to do.
Jose Manuel Gomez-Perez:
We produced two systems in this project, one for each of the selected use cases. There were many other use cases, but these were the two that were most interesting for the agency. One of the systems is called SpaceQA, which is basically space question answering. This is the first system of its kind developed in the space domain. Here, what we did was to model this problem as an open-domain question answering problem, where you have two main stages: a retriever stage, and then a reader stage, or reader model.
Jose Manuel Gomez-Perez:
What the retriever does is take the question, go to the collection of documents, and retrieve the passages that look promising for answering that question. Then what the reader does is take those passages and the question and try to extract the answer to the question from each of those passages. Then we have a ranking, with confidence values, of the different answers that are extracted.
Jose Manuel Gomez-Perez:
To do this, we tried different approaches. We applied several neural approaches on both the retriever side and the reader side. For the retriever, we tried algorithms like BM25, which is used in Elasticsearch or Solr, and we also used ColBERT and coCondenser, which are neural approaches for information retrieval. For the reader, we used RoBERTa-Large, a transformer language model fine-tuned on a generic dataset for extractive question answering called SQuAD.
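Here is a compact sketch of that retriever/reader setup, using BM25 over a toy passage collection and a publicly available SQuAD-tuned checkpoint as a stand-in for the RoBERTa-Large reader mentioned above; the neural retrievers (ColBERT, coCondenser) are omitted for brevity, and the passages are just the examples quoted later in the talk.

```python
from rank_bm25 import BM25Okapi
from transformers import pipeline

# Toy passage collection standing in for the CDF report corpus.
passages = [
    "The rover is expected to traverse a total distance of 255 meters.",
    "Athena will be launched on an Ariane 5.",
    "Dust storms can occur at any time during the MarsFAST mission.",
]
bm25 = BM25Okapi([p.lower().split() for p in passages])          # retriever stage
reader = pipeline("question-answering",
                  model="deepset/roberta-base-squad2")           # reader stage (stand-in)

def answer(question: str, top_k: int = 2) -> dict:
    # Retriever: pick the passages that look most promising for this question.
    candidates = bm25.get_top_n(question.lower().split(), passages, n=top_k)
    # Reader: extract an answer span from each candidate, keep the highest-confidence one.
    spans = [reader(question=question, context=c) for c in candidates]
    return max(spans, key=lambda s: s["score"])

print(answer("Which launcher will Athena use?"))
```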
Jose Manuel Gomez-Perez:
Okay. This is what the demo looks like. What you can see on the left is a question like, “What is the rover expected distance to traverse?” And here, what we have is the passage that the system has extracted. And in green, you can see the specific answer, which is 255 meters. On the right-hand side, you can see other examples of questions and answers that are produced by the system.
Jose Manuel Gomez-Perez:
For example, “Which launcher will Athena use?” “Ariane 5.” Or number seven, for example, “When can dust storms occur during the MarsFAST mission?” “At any time.” This is very powerful because it doesn’t require any kind of encoding or recognition of the type of question. The underlying language model is able to understand this, and that gives you tremendous flexibility. You don’t even need to represent all this information explicitly on top of it. We leverage the knowledge that has been encoded in the language model during the pre-training phase.
Jose Manuel Gomez-Perez:
Okay. And similarly, we have SpaceQQuiz, space quiz, which is the system we developed for automatic quiz generation from quality procedure documents. Here, basically, what you do is take a quality management procedure document and extract the passages from the PDF; processing the PDF is itself quite complex. Then we send these passages to a question generation language model; in our case, we used T5 and BART as generative models. This produces a number of questions from the text.
Jose Manuel Gomez-Perez:
At the same time, we run these questions through our question answering model, the one we saw before, and see whether each question can be answered automatically. If it cannot be answered, we do not include it in the quiz. Here you have some examples of questions generated from the text. For example, “Who can issue a supplier waiver?” “OPS project manager, or service manager.” “What does the leader of the operator’s team do with the raised anomaly report?” Bear in mind that these questions are automatically generated by the model, not by a human. And the answer is, “Performs a preliminary review.” “Who chairs the Software Review Board?” “The owner of the software.” And so on and so forth.
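To show roughly how generation and the answerability filter fit together, here is a hedged sketch. The question-generation checkpoint name is a placeholder for any T5 or BART model fine-tuned on question generation, and the reader is the same kind of SQuAD-tuned model used in the earlier example; none of this is the exact SpaceQQuiz implementation.

```python
from transformers import pipeline

# Placeholder checkpoint: substitute a T5/BART model fine-tuned for question generation.
question_generator = pipeline("text2text-generation",
                              model="your-org/t5-question-generation")  # placeholder name
qa_model = pipeline("question-answering", model="deepset/roberta-base-squad2")

def generate_quiz(passage: str, min_score: float = 0.3) -> list:
    """Generate candidate questions from a passage and keep only answerable ones."""
    candidates = question_generator(f"generate questions: {passage}",
                                    num_beams=5, num_return_sequences=3)
    quiz = []
    for candidate in candidates:
        question = candidate["generated_text"]
        answer = qa_model(question=question, context=passage)
        if answer["score"] >= min_score:   # discard questions the reader cannot answer
            quiz.append({"question": question,
                         "answer": answer["answer"],
                         "confidence": answer["score"]})
    return quiz
```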
Jose Manuel Gomez-Perez:
Okay. This also led to a number of publications in very prestigious conferences. For example, SIGIR, the Conference on Research and Development in Information Retrieval, where this work will be presented next month, actually. And SpaceQQuiz, the quiz system, will be presented at the International Natural Language Generation Conference, which will be in a couple of months as well. Very successful in that sense. That was the end of the talk. Thank you very much, and I’ll be happy to take any questions.
Brian Munz:
I had a quick question about the quiz in particular, because that seems like a very interesting new way of education, in the sense that it must be scary for the person taking the quiz, because it could literally be about anything. Is there a sense of weighting? Or how do you ensure that the questions being produced aren’t just a string of “what is the width of this particular element?” Ten of those may not be the best indication of a larger body of knowledge.
Jose Manuel Gomez-Perez:
Yeah. Yeah, no, actually what we do is produce the questions and accompany them with a confidence value of how good the quality is, and that is produced by the model. And then, what we did was create a web interface that lets the person in charge of releasing the quizzes validate the questions that he or she is most interested in. In that sense, we make sure that what goes out in the end is of really, really good quality.
Brian Munz:
Yeah. Makes sense. It’s very interesting. We do have a question about whether you’re able to share the slides after the presentation. We can address that on the various platforms as well as on LinkedIn.
Jose Manuel Gomez-Perez:
Sure. No problem. Yeah.
Brian Munz:
Great. But yeah, thanks a lot for sharing this. This has been extremely interesting. And I love seeing all of the real-world use cases that are outside of the more commonly known use cases for NLP. It’s really interesting to see where things are going. Thanks for your time.
Jose Manuel Gomez-Perez:
Yeah. I mean, the interesting thing with all these use cases and domains is that they come from the research work we are doing in the lab. Originally, these were research projects that then evolved into innovation projects, where the partners we were working with in those projects became customers.
Brian Munz:
Yeah, yeah, yeah.
Jose Manuel Gomez-Perez:
Because they saw the possibilities of the technology, and now they are interested in bringing it in-house.
Brian Munz:
Yeah, exactly. Sometimes research projects can be a bit broad, or maybe too far in the future, so it’s interesting to see a case where there’s a pretty solid application right out of the gate. It’s really cool to see that.
Jose Manuel Gomez-Perez:
Yeah. I think it was a very interesting challenge. For example, in the case of the last block, space engineering, with all the limitations we had, we could not use any proprietary technology, and we did not have labeled data to train or fine-tune machine learning models, so we had to see how far we could go with what is currently available out there, available to anyone. The trick was to figure out what we needed to do from an engineering point of view to make those resources really work well for the customer.
Brian Munz:
Yeah, exactly. Exactly. And sometimes out of those limitations, you end up doing things that you wouldn’t have thought to do before, and that makes everything better. Again, thanks for sharing that. This was extremely interesting. And hopefully, we’ll see more from you in the future. Thanks again for presenting this.
Jose Manuel Gomez-Perez:
My pleasure.
Brian Munz:
Super interesting. [inaudible 00:28:14]. Great. Yep. Yeah, with that, I just want to thank everyone for tuning in. And again, thanks to Jose for sharing his interesting work. It’s always good to hear from someone who’s out there pushing the boundaries.
Brian Munz:
If you have any follow-up questions, please leave them in the comments below and we’ll get back to you. Again, we can share the presentation. And if you enjoyed this episode, next week, we have… Oh right. Next week, we’re going to have solving crossword puzzles with Webcrow AI. If you Google Webcrow AI, you’ll see it’s a very interesting project that was made to solve crossword puzzles. We’re going to have someone come and explain the intricacies of that. Just make sure to join us. And until next time, I’ll see you later.