- News Feature: University Evaluation in the UK
June 27, 2023, #44 In a Science Insider post titled “‘Quietly revolutionary’ plan would shake up the way U.K. universities are evaluated”¹, Cathleen O’Grady writes that the “United Kingdom’s four national funding bodies”¹ are changing “the way the labor-intensive Research Excellence Framework (REF) exercise defines ‘excellence’.”¹ Under these changes, planned for the 2028 evaluation round, “Research culture will get more weight, with less emphasis on publications.”¹ O’Grady writes that previous versions of the REF “have placed too great an emphasis on the number of ‘outputs’—peer-reviewed papers and other publications”¹, effectively prioritizing publication numbers “over long-term projects or contributions such as software development”¹. The new approach would alter the weighting so that “25% of each university’s score will rest on its assessment of ‘people, culture and environment,’ up from 15% in the 2021 REF.”¹ Publication numbers are not an ideal measure of quality, but they do have the virtue of being relatively transparent, and equally clear criteria could be hard to find. One of the people leading this change appears to be “Catriona Firth, policy lead for research and culture at the national funding body Research England”¹, who “says she and her colleagues will be looking for ‘robust indicators and evidence’ that universities can submit so the assessment ‘goes beyond institutions making claims about how good they are.’”¹ Looking for robust indicators is good, but actually naming them may be harder. Unequal treatment of staff appears to be a problem with the current system. According to Firth, the current system “created inequalities and low morale within departments, and incentivized universities to carefully time the end of short-term contracts to keep certain staff out of their REF submissions.”¹ It is never easy to change an evaluation culture.
Institutions that were successful in the old culture may be uncomfortable with the new criteria, especially if the criteria are vague. Nonetheless, this change could have positive effects, because many countries and institutions implicitly or explicitly use the REF as a model. Having the UK take the lead in moderating the pressure to publish may also reduce the number of dubious publications around the world. 1: Cathleen O’Grady, ‘“Quietly Revolutionary” Plan Would Shake up the Way U.K. Universities Are Evaluated’, Science Insider, accessed 22 June 2023, https://www.science.org/content/article/quietly-revolutionary-plan-would-shake-way-u-k-universities-are-evaluated.
- News Feature: Data Falsification in an Article about Dishonesty
July 04, 2023, #45 In the blog “Data Colada: Thinking about Evidence and Vice Versa”, authors Uri Simonsohn, Leif Nelson, and Joe Simmons wrote an “introduction to a four-part series of posts detailing evidence of fraud in four academic papers co-authored by Harvard Business School Professor Francesca Gino. […] In the Fall of 2021, we shared our concerns with Harvard Business School (HBS). Specifically, we wrote a report about four studies for which we had accumulated the strongest evidence of fraud.”¹ The authors do not believe that Gino’s co-authors were involved in collecting the data. One of the studies with fake data was itself a study on dishonesty. The problem was not with the data collection, but involved tampering afterwards. One goal of the study was to find out whether signing a reporting form at the top or at the bottom made a difference in the honesty of the results. A problem signal was that the data were not sorted correctly. Simonsohn et al. write “There is no way, to our knowledge, to sort the data to achieve this order.”¹ Someone had apparently changed eight observations so that they now gave a very strong result in the predicted direction. “A little known fact about Excel files is that they are literal zip files, bundles of smaller files that Excel combines to produce a single spreadsheet” and a “calcChain” tells Excel what order to use when processing the data.¹ The authors used the “calcChain to go back and see what this spreadsheet may have looked like back in 2010, before it was tampered with”.¹ The Data Colada blog gives an example of how the manipulation took place. The manipulation was technically clever and not difficult to do.
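The zip trick that the Data Colada authors describe can be seen with nothing more than Python’s standard library. The following is an illustrative sketch, not the authors’ actual analysis: the file name and the calcChain XML content below are invented stand-ins, built in the script itself so it runs without a real spreadsheet.

```python
import zipfile

def read_calc_chain(xlsx_path):
    """Return the raw calcChain XML stored inside an .xlsx file.

    An .xlsx file is an ordinary zip archive of smaller XML parts;
    xl/calcChain.xml records the order in which Excel last calculated
    each cell, which is how investigators can spot rows that were
    moved after the formulas were first entered.
    """
    with zipfile.ZipFile(xlsx_path) as zf:
        return zf.read("xl/calcChain.xml").decode("utf-8")

# Build a minimal stand-in archive (hypothetical content) so the
# function can be exercised without a real Excel workbook.
with zipfile.ZipFile("demo.xlsx", "w") as zf:
    zf.writestr("xl/calcChain.xml",
                '<calcChain><c r="A2" i="1"/><c r="A3" i="1"/></calcChain>')

print(read_calc_chain("demo.xlsx"))
```

In a genuine workbook, a calcChain entry that is out of step with the sorted order of the visible rows is the kind of inconsistency the blog post describes.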
One result is that “Gino has gone on ‘administrative leave’, and the name of her chaired position at HBS [Harvard Business School] is no longer listed.”¹ The blog also “heard from some HBS faculty that Harvard’s internal report was ~1,200 pages long…”¹ Assuming all the evidence is true, key questions remain: why would a person with a safe job and good reputation falsify data, and what in our academic incentive structure makes the temptation worthwhile? NOTE: Data Colada has three other articles about data falsification involving Francesca Gino: "My Class Year Is Harvard" (LINK) "The Cheaters Are Out of Order" (LINK) "Forgetting The Words" (LINK) 1: Leif Nelson, Uri Simonsohn, and Joe Simmons, ‘[109] Data Falsificada (Part 1): “Clusterfake”’, Data Colada, 17 June 2023, http://datacolada.org/109.
- News Feature: AI Claim Debunked
July 18, 2023, #46 In an article in the Chronicle of Higher Education on 7 July 2023, Tom Bartlett writes about “A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ.”¹ It seems that “15 authors, including several MIT professors”¹ wrote a study that “found that ChatGPT, the popular AI chatbot, could complete the Massachusetts Institute of Technology’s undergraduate curriculum in mathematics, computer science, and electrical engineering with 100-percent accuracy.”¹ The students found “‘glaring problems’ that amounted to, in their opinion, allowing ChatGPT to cheat its way through MIT classes.”¹ The students who collaborated on the critique were Neil Deshmukh, Raunak Chowdhuri, and David Koplow. The more they looked at the paper, the more they had doubts about the methodology and ultimately about the data. “The study used what’s known as few-shot prompting, a technique that’s commonly employed when training large language models like ChatGPT to perform a task.”¹ But the “examples were so similar to the answers themselves that it was, they wrote, ‘like a student who was fed the answers to a test right before taking it.’”¹ When the authors posted their critique, the positive response to their comments surprised them. It seems that some of the paper’s authors had not expected the paper to be posted as a pre-print. Blame seems to fall on “Iddo Drori, an associate professor of the practice of computer science at Boston University.”¹ “The problems went beyond methodology. Solar-Lezama says that permissions to use course materials hadn’t been obtained from MIT instructors even though, he adds, Drori assured him that they had been.”¹ This may not be the only problematic example. A paper Drori “published last year in the Proceedings of the National Academy of Sciences”¹ made similar exaggerated claims.
“Ernest Davis, a professor of computer science at New York University…” provides a balanced perspective: finding the right formulas is one thing, but “deep understanding may well take considerably longer”¹. This story shows how easily the hype about AI can lead to unreliable conclusions. It is also an example of how easily the search for fame can seduce even top professors into becoming involved with unreliable scholarship. 1: Tom Bartlett, ‘A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ.’, The Chronicle of Higher Education, 7 July 2023, https://www.chronicle.com/article/a-study-found-that-ai-could-ace-mit-three-mit-students-beg-to-differ.
- News Feature: Knowledge Diffusion at Academic Conferences
July 25, 2023, #47 Misha Teplitskiy, Soya Park, Neil Thompson, and David Karger wrote an Impact of Social Sciences blog post on the topic of “Does anyone learn anything new at conferences? Measuring serendipity and knowledge diffusion at academic conferences”¹. Specifically, they ask: “What are the benefits of conferencing and are they worth it?”¹ Finding a scholarly answer is, as they admit, not easy. “One area where evidence is accumulating is on the networking function of conferences. For example, in an experiment at a medical school Boudreau and colleagues² found that placing two scientists in the same room for a networking event – a common occurrence at conferences – increases the probability that they’ll co-author a grant proposal by 75%.”¹ How much people learn at a conference is a complex issue, since access to information has increased internationally. The authors ask: “[i]s serendipitous diffusion common enough that individuals should take it into consideration when deciding whether to attend a conference?”¹ They claim to have “unusually strong evidence that conference attendees really do learn a lot in presentations, and that a sizable fraction of the diffusion is serendipitous, i.e. the attendees did not plan on learning the ideas. … Compared to papers presented in timeslots that individuals could not attend due to scheduling conflicts, they cite papers in presentations without conflicts about 50% more often.”¹ The method relied on “liked” and “not liked” markings in the scheduling tool Confer to determine whether people listened to a presentation and how they valued it. The authors conclude “[u]sing this ‘scheduling conflicts’ research design, we find that the diffusion function of conferences is substantial.
… To our knowledge, this is the first time that the contribution of serendipity to diffusion has been quantified.”¹ Ultimately they ask “are the in-person conference benefits worth the costs, particularly when taking into account accessibility?”¹ Cost is an issue that every organisation running a conference ought to consider, which is why the iSchools routinely offer virtual conferences for those with insufficient travel budgets, but whether these research results also apply to virtual conferences is an unanswered question. The evidence is valuable but needs further confirmation. 1: Misha Teplitskiy et al., ‘Does Anyone Learn Anything New at Conferences? Measuring Serendipity and Knowledge Diffusion at Academic Conferences’, Impact of Social Sciences (blog), 19 July 2023, https://blogs.lse.ac.uk/impactofsocialsciences/2023/07/19/does-anyone-learn-anything-new-at-conferences-measuring-serendipity-and-knowledge-diffusion-at-academic-conferences/. 2: Kevin J. Boudreau et al., ‘A Field Experiment on Search Costs and the Formation of Scientific Collaborations’, The Review of Economics and Statistics 99, no. 4 (1 October 2017): 565–76, https://doi.org/10.1162/REST_a_00676.
- News Feature: Ghost-written Peer Reviews
August 1, 2023, #48 On 26 July 2023, Laura Feetham wrote a guest post in the Scholarly Kitchen with the title: “Ghost-Writing Peer Reviews Should Be a Thing of the Past”¹. As she notes, a ghost-written peer review “potentially misrepresents the expertise of the individuals involved”¹. Such ghost-writing involves the ethical issue of giving credit to all contributors: “...according to a survey², ‘70% of co-reviewers report the experience of making significant contributions to a peer review report without knowingly receiving credit’”¹. Feetham suggests that a formal co-reviewing status could be one solution. Another argument for acknowledged co-reviewing is not just to make the authorship of a review more transparent, but to give early career researchers a chance to learn reviewing from more experienced colleagues. She argues that “[w]ith peer review pressure on experienced researchers mounting due to the growing volume of manuscripts, co-review can also help to address the shortage of reviewers in the scientific community.”¹ A question that Feetham’s post does not address is whether the quality of ghost-written reviews is comparable to that of named reviewers. McDowell et al. (2019) write: “A lack of ‘training the trainers’ was cited as a main reason for why pairing experts with new peer reviewers failed to improve review quality in one of the few randomized controlled trials of this practice³.”² This result seems to imply that the quality of the reviews was not noticeably worse than what the named reviewer would normally provide – perhaps because the reviews appeared under that person’s name. Serious ethical questions remain about why the co-author of a review should not receive appropriate credit for doing a review.
The general lack of recognition for reviewing could serve as a justification on the principle that recognition has no real value, and in the case of negative reviews, leaving off the names of early career co-authors could count as an attempt to protect them from retribution. Excuses are easy to find. In a fair world, however, a co-reviewer should at least have the option of inclusion. AUTHORSHIP NOTE. Given the topic of this News Feature, it seems appropriate to note that this author (Michael Seadle) had editorial support from Katharina Gudat. 1: Laura Feetham, ‘Guest Post — Ghost-Writing Peer Reviews Should Be a Thing of the Past’, The Scholarly Kitchen, 26 July 2023, https://scholarlykitchen.sspnet.org/2023/07/26/guest-post-ghost-writing-peer-reviews-should-be-a-thing-of-the-past/. 2: Gary S McDowell et al., ‘Co-Reviewing and Ghostwriting by Early-Career Researchers in the Peer Review of Manuscripts’, ed. Peter Rodgers et al., ELife 8 (31 October 2019): e48425, https://doi.org/10.7554/eLife.48425. 3: D. Houry, S. Green, and M. Callaham, ‘Does Mentoring New Peer Reviewers Improve Review Quality? A Randomized Trial’, BMC Medical Education 12 (2012): 83, https://doi.org/10.1186/1472-6920-12-83.
- News Feature: Open Access and an Early Warning System
August 9, 2023, #49 On 3 August Helen Branswell wrote a post in the STAT blog called: “ProMED, an Early Warning System on Disease Outbreaks, Appears near Collapse”¹. ProMED is “a program operated by the International Society for Infectious Diseases”¹ and played a role in providing early information about COVID. ProMED announced on 14 July 2023 that it had financial problems: “ProMED needs unrestricted operational funding, and we have found that those opportunities are few and far between as various regional and global surveillance hub efforts come online. While the COVID-19 pandemic made the entire world aware of the importance of pandemic preparedness and epidemic surveillance, ProMED has been unable to capitalize on the unprecedented amounts of money that were infused into this space.”² They explain further that they have sought funding, but “many of our traditional funders have moved to project-based funding under the premise that other government and international entities will cover sustainment costs….”² Nothing worked, not even their own fundraising campaign. Their solution is to charge subscriptions. A group of moderators posted a protest about ProMED’s plan. They “announced they were suspending work for ProMED, expressed a lack of confidence in the ISID’s (International Society for Infectious Diseases) administrative operations, suggesting ProMED needs to find a new home.”¹ The unpaid moderators write “we cannot be expected to continue working on good will alone”¹. They play a key editorial function by receiving information from scientists who want to remain anonymous, which the moderators then assess and curate before sending out. Linda MacKinnon, the CEO of ISID, responded to the criticism: “predictable sustained revenue stream for the massive amount of work that takes place within ProMED is not something we should be afraid to talk about”³.
The situation is not unlike that of unpaid editors and reviewers for open-access journals, who do essential unremunerated work to make it possible to get information out to everyone, regardless of whether they can afford to pay for it. The trouble is that publishing the information involves costs beyond what the moderators do for free. Just providing a reliable and accessible platform involves hidden costs and maintenance. In the end some entity must step up to provide infrastructure support for valuable services like this. It could be a university. It could be a government. The question, as always, is what entity is prepared to volunteer? 1: Helen Branswell, ‘ProMED, an Early Warning System on Disease Outbreaks, Appears near Collapse’, STAT (blog), 3 August 2023, https://www.statnews.com/2023/08/03/promed-early-warning-system-on-disease-outbreaks-appears-near-collapse/. 2: ‘Promed Post’, ProMED-Mail (blog), 14 July 2023, https://promedmail.org/promed-post/. 3: Helen Branswell, ‘Organization behind ProMED Defends Move to Subscription-Based Model’, STAT (blog), 4 August 2023, https://www.statnews.com/2023/08/04/organization-behind-promed-defends-move-to-subscription-based-model/.
- News Feature: The Defamation Risk for Exposing Fraud
August 15, 2023, #50 On 9 August 2023 Kelsey Piper wrote an article for the news website Vox with the title: “Is it defamation to point out scientific research fraud? A Harvard professor accused of research fraud brings a multimillion-dollar lawsuit against the university and her accusers. What comes next?”¹ The story is about the same topic as the News Feature of 4 July 2023 about the Data Colada report on possible fraud by Francesca Gino². What makes this story instructive is not whether the Data Colada accusations were true or false, but whether possible legal consequences could discourage those investigating and reporting. Gino is suing the three authors of the Data Colada post plus Harvard University “for ‘not less than $25 million.’”¹ Her lawsuit argues that “the researchers failed to consider other explanations for the ‘anomalies’ in the data sets analyzed…”¹ Piper also provides a link to the lawsuit. She writes: “Having read her case and spoken to defamation experts, I think Gino is unlikely to win at trial.”¹ This sounds like good news for the defendants, but “it will take years — and be extraordinarily expensive — to settle the factual question in court of whether the statements are true.”¹ Truth is not necessarily the issue. Piper writes: “often, scientists whose theories are challenged are trying to resort instead to silencing their critics with the courts.”¹ Lawsuits can certainly have a chilling effect on open scholarly debate, as Piper writes: “If other researchers didn’t occasionally dig into weird results and look for signs of manipulation, many cases of data falsification would never be noticed…”¹ Checking on research results is an integral part of the modern scientific process. Such checking ought ideally to happen during peer review, but time-pressure often means cutting corners. The peer review system is imperfect, and post-review corrections are a normal part of the academic process. 
The risk is that a lawsuit could, as Piper writes, “make future academics who notice something off in others’ work more reluctant to speak up about it.”¹ 1: Kelsey Piper, ‘Is It Defamation to Point out Scientific Research Fraud?’, Vox, 9 August 2023, https://www.vox.com/future-perfect/2023/8/9/23825966/francesca-gino-honesty-research-scientific-fraud-defamation-harvard-university. 2: Leif Nelson, Uri Simonsohn, and Joe Simmons, ‘[109] Data Falsificada (Part 1): “Clusterfake”’, Data Colada, 17 June 2023, http://datacolada.org/109.
- Making Meaningful Impact: Using Data Science For Social Good
Imagine living in one half of a duplex. Though you maintain your part of the home, the other half of the building is abandoned and has fallen into disrepair. The roof is leaking. Unidentified critters have made a nest in the wall. Mold is creeping into the attic. Regardless of how well you keep up your personal living space, your home’s safety and value will be affected. Abandoned buildings in disrepair pose a safety hazard and can have adverse effects on the structural integrity of adjacent residences – especially among the row homes that comprise the majority of housing units in Baltimore City, Maryland. Neighbors deal with rat infestations, have difficulty getting insurance, and experience damage to their own homes because of being attached to structures with severe roof damage. These challenges are occurring at a city-wide scale in Baltimore, where the city’s Department of Housing & Community Development is tasked with assessing 15,000 vacant homes to identify and remediate roof damage. The problem is complex, systemic, and formidable. Enter the Data Science for Social Good (DSSG) Summer Fellowship at Carnegie Mellon University (CMU). Baltimore’s Department of Housing & Community Development partnered with DSSG to improve community safety and economic well-being by remediating buildings with roof damage in Baltimore. Aspiring data scientists from the DSSG team identified hazardous buildings with roof damage, then prioritized the most urgent needs for preventative interventions. Team member Chae Won Lee said one significant challenge was determining from the ground level whether a roof had damage. A second was the scope of work, with so many vacant homes in Baltimore to assess. Lee and her project teammates Justin Clark and Jonas Coelho de Barros created a successful system that used machine learning to assign a roof damage score to each address. 
Incorporating data that included aerial images of the entire city, manual visual assessments of historical aerial inspections, housing inspection notes, details from 311 citizen hotline calls, and other information provided by the city, the team developed an AI system that effectively identified and prioritized structures with the most significant roof damage. The prioritized list allows city inspectors to be more efficient and more equitable by focusing efforts on buildings with actual damage across neighborhoods and communities that are most impacted by this problem. The list can be regenerated each year with minimal manual effort. The system identifies roof damage more accurately than human observation alone. Finally, the model eliminates potential bias by identifying roof damage equitably across neighborhoods. Ultimately, their solution has the potential to improve the lives of people in 5,000 households on city blocks with damaged roofs. The Department of Housing & Community Development (DHCD) recently garnered an innovation award for the project’s impact. The Baltimore roof initiative is just one example of the impact DSSG and CMU are having on communities locally, nationally, and internationally. In another project, DSSG Fellows worked to improve call routing for 988, the National Suicide Prevention Lifeline. Picture someone you love suffering with depression or other mental health issues. They decide to reach out for help, and call the National Suicide Prevention Lifeline. They wait and wait for a person to answer the call, but after a few minutes, they hang up the phone. An estimated 50 million people in the United States live with mental illness. The National Suicide Prevention Lifeline receives two million calls each year, which are routed to about 200 call centers around the country.
Team members Tejumade Afonja, Charles Cui, Paula Subías-Beltrán, and Irene Tang worked with Vibrant Emotional Health to address lengthy call wait times that result in nearly 20 percent of calls being abandoned before the callers receive help. Subías-Beltrán said that ideally, the team would need to know the current capacity of each call center, the current waiting time for each call center, and the length of time a caller is willing to wait – but none of that data was available to the network because of its distributed nature. The team worked with the data available in the system to determine an alternative routing approach based on where each call came from, the call center where the calls were routed, the wait times, and whether the call was answered. They were able to create a model that predicted the likelihood that a call would be picked up at a specific call center at a given time. The team’s model has the potential to be better than the approach the organization had been using, and allowed the team to build a new routing simulator that can increase the connection rate for callers. That improvement means thousands of additional callers seeking mental health assistance may get the support they need in time. The change will ultimately save lives. How DSSG Came to Be Rayid Ghani, Distinguished Career Professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at CMU, created DSSG because he was looking to bridge a gap – for himself and for his students. “The intersection of what I cared about and what I was good at – that’s the work I really wanted to do,” Ghani said. As chief data scientist for the Obama 2012 campaign, Ghani had had a taste of what it felt like to do work that made an impact on society. He had an “a-ha moment” in 2013 during a talk to a group of CMU graduate students in machine learning (ML). “I was trying to tell them about the intersection of ML and social issues,” Ghani said. 
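The routing idea the team describes can be sketched in a few lines: estimate, from historical records, the probability that a call is answered at a given center at a given hour, then route new calls toward the center with the best predicted pickup rate. This is a minimal sketch under invented data; the center names and records below are hypothetical, and the team's actual model and simulator were built from the Lifeline's real routing data.

```python
from collections import defaultdict

# Hypothetical historical records: (center, hour_of_day, answered).
history = [
    ("center_a", 22, True), ("center_a", 22, False), ("center_a", 22, True),
    ("center_b", 22, True), ("center_b", 22, True),
    ("center_b", 22, False), ("center_b", 22, True),
]

# Tally answered calls and total calls per (center, hour).
counts = defaultdict(lambda: [0, 0])
for center, hour, answered in history:
    counts[(center, hour)][0] += int(answered)
    counts[(center, hour)][1] += 1

def pickup_probability(center, hour):
    """Empirical probability that a call is answered at this center and hour."""
    answered, total = counts[(center, hour)]
    return answered / total if total else 0.0

def route(hour, centers):
    """Send the call to the center with the highest predicted pickup rate."""
    return max(centers, key=lambda c: pickup_probability(c, hour))

print(route(22, ["center_a", "center_b"]))  # center_b: 3/4 vs. 2/3
```

A production system would need far more than raw frequencies (confidence intervals for sparse cells, time-of-week effects, center capacity), but even this simple estimate shows how available answered/abandoned data can drive smarter routing than geography alone.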
“What I expected was that they knew about the social problems but didn’t find them interesting. What I heard that was a little bit surprising was that they didn’t realize there was this intersection, and that we could do something about those problems with these skills.” At the same time, Ghani wondered why data and evidence were not used more often in government to solve societal problems. In talking with colleagues at government agencies and non-profits who worked on social issues, Ghani consistently heard one of three explanations. Some individuals were familiar with the concepts of ML and artificial intelligence (AI), but were not sure exactly how they could be used to address specific issues. Another group understood the capabilities of AI, but lacked staff skilled in using it. Finally, some leaders had both comprehension and staff, but were without ML and AI tools designed for their specific needs. The opportunity was ripe for partnership, and Ghani embraced it. He launched the Data Science for Social Good Initiative in 2013, while working at the University of Chicago. The program has been replicated at the University of Washington (2015), Stanford University (2019), Georgia Institute of Technology (2019), and Imperial College London (2019), among others. DSSG at CMU: Multidisciplinary and Focused on Ethics When Ghani returned to CMU – his alma mater – in 2019 to teach, he brought the DSSG initiative with him. DSSG Fellows spend 12 weeks working with non-profits and government agencies to tackle problems affecting real communities. Their innovative solutions have real and significant impact. Following a pause resulting from the pandemic, the first class of 24 DSSG Fellows at CMU completed six projects in 2022. Though the projects ranged from reducing the risk of homelessness in Pittsburgh to improving patient care in Pakistani emergency rooms, the approach to each included some common elements. Among those key aspects: project teams are interdisciplinary.
Teams consisted of individuals from different backgrounds, including computer science, ML, AI, statistics, math, economics, public policy, sociology, psychology, engineering, and physical sciences. “None of these complex problems can be solved by any discipline alone,” Ghani said. Another essential principle is that the projects are problem-driven. Operational challenges are identified through collaboration with project partners and community members. Project teams work closely with those directly involved with and affected by the problem as they strategize and implement solutions. A third – and possibly the most important – component is using the lens of ethics to approach every issue. “It’s less about ethics as a course or a lecture,” Ghani said. Instead, he explained, it’s about consistently considering the ethical implications of every decision. “What design choices are we making? What are the possible consequences of those choices downstream in three months or six months?” Ghani says the DSSG program still has room to improve. “Our projects are local, national, and international,” he explained. “We need to figure out better ways to be engaged with the communities in the areas affected by our projects.” That said, it’s hard to argue with DSSG results. Data science students have an opportunity to hone their skills with support from mentors and experts while solving real-world problems. Communities and non-profit organizations bring complex challenges that they may not have the expertise or resources to tackle independently, and receive help from some of the brightest minds in the world. And in the process, Ghani is creating a space for individuals to engage in work that they love and are good at, producing results that improve people’s lives. It turns out the work Ghani really wanted to do appeals to a whole new generation of data scientists – and he is forging the path to show them what’s possible. 
Original Article: https://www.heinz.cmu.edu/media/2023/February/making-meaningful-impact-using-data-science-for-social-good
- Michael Lesk, Who Helped Build the Computer Operating System Unix, Transitions to Professor Emeritus
Michael Lesk, Professor of Library and Information Science and a computer scientist who was among the group of scientists at AT&T Corporation’s Bell Laboratories who built the computer operating system Unix in the early 1970s, has transitioned to Faculty Emeritus effective July 1, 2023. “In addition to his many scholarly accomplishments, Michael has been an outstanding citizen of the school,” SC&I Interim Dean Dafna Lemish said. “He served as department chair and program director, and then came to the rescue several times as acting chair when the department had emergencies. He not only agreed to serve on a variety of committees when asked to, he also recognized needs and volunteered to do work even when he was not asked. During his time at SC&I we could always count on him to support the department and school in many ways. He will be missed.” At Bell Labs, Lesk created Unix tools for word processing, developed Lex, the lexical-analyzer generator used in building compilers on Unix, introduced the “Lesk algorithm,” a classic algorithm for word sense disambiguation, authored the Portable I/O Library, and assisted in the development of the C language preprocessor. In addition to working with Unix software, Lesk has also spent his career working in digital libraries, information economics, and digital preservation. He conducted many of the retrieval experiments and wrote much of the retrieval code for the SMART Information Retrieval System project. After earning a Ph.D. in Chemical Physics from Harvard University in 1969 (Lesk also earned a bachelor’s degree in Physics and Chemistry from Harvard), Lesk spent 14 years as a member of the technical staff at Bell Labs. When a section of Bell Labs became Bellcore, Lesk then spent fifteen years leading the Computer Science Research Department at Bellcore, including three years as Bellcore’s Chief Research Scientist. In 1987, he took a leave from Bell Labs to spend a year as a Senior Visiting Fellow at the British Library (at University College London).
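The Lesk algorithm mentioned above chooses the sense of an ambiguous word whose dictionary definition shares the most words with the surrounding context. The following toy sketch illustrates the simplified form of the idea; the two-sense “cone” inventory below is invented for demonstration, whereas Lesk’s original work used full machine-readable dictionary definitions.

```python
# Hypothetical two-sense gloss inventory for the word "cone".
senses = {
    "pine_cone": "woody fruit of an evergreen pine tree with seed scales",
    "ice_cream_cone": "crisp wafer pastry that holds a scoop of ice cream",
}

def lesk(context, senses):
    """Pick the sense whose gloss overlaps most with the context words."""
    context_words = set(context.lower().split())

    def overlap(item):
        _, gloss = item
        return len(context_words & set(gloss.split()))

    return max(senses.items(), key=overlap)[0]

print(lesk("the squirrel dropped a cone from the pine tree", senses))
# Gloss overlap {pine, tree} beats {a}, so the pine-cone sense wins.
```

The real algorithm adds refinements (stemming, stop-word removal, overlaps among the glosses of neighboring words), but the core is exactly this counting of shared words, which is how it can tell a pine cone from an ice cream cone.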
Lesk left Bellcore in 1998 and until 2002 he worked at the National Science Foundation (NSF) as the Division Director in Information and Intelligent Systems. During this period of his career, he also spent several years as an adjunct lecturer in Computer Science at Columbia University. In 2003 Lesk joined the faculty of the Library and Information Science Department at the School of Communication and Information (then named the School of Communication, Information and Library Studies (SCILS)), Rutgers University-New Brunswick. During the 20 years Lesk served on the faculty, he taught and mentored hundreds of undergraduate and graduate students, and he served as chair of the department from 2005 to 2008. In 2009 he spent a sabbatical as a visiting researcher at Google. He has taught the undergraduate and graduate courses Digital Libraries; Digital Library Technology; Fundamentals of Data Science; Data Analytics; Data Curation/Digital Curation; Database Design and Management; Preservation; Introduction to Information Technologies; and The Internet and the Information Environment. He is the author of hundreds of papers, and he has delivered hundreds of talks at universities and conferences all over the world. He is the author of the book “Understanding Digital Libraries” (Morgan Kaufmann, San Francisco 2004), which is the second edition of a book originally titled “Practical Digital Libraries.” Over the course of his career, Lesk has received many awards and significant recognition for his outstanding contributions to Unix; his research in Information Retrieval; and his work on the design and implementation of multimedia Digital Libraries. These include the "Flame" award for lifetime achievement from Usenix in 1994; his election to the National Academy of Engineering in 2005; and his election as a Fellow of the ACM (Association for Computing Machinery) in 2006.
Lesk has served as chair of the National Academies Board on Research Data and Information (2008-2010); ACM SIGIR (Special Interest Group on Information Retrieval), 1983-1985; and ACM SIGLASH (Special Interest Group on Language Analysis and Studies in the Humanities), 1973-1975. SC&I Professor Emeritus of Library and Information Science Paul Kantor said he first heard of Lesk when Lesk was working at Bell Labs and Kantor was on a sabbatical at OCLC. “The Research Director, Martin Dillon, said to me, ‘You should come to this talk -- Michael Lesk from Bell Labs is here. He is the smartest person I've ever met.’ In a fairly long career, I have studied or worked with some nine Nobel Laureates, and I would say Michael is right up there.” Kantor said he first began to know Lesk personally while he was on a Fulbright in Norway. “Michael, who was at the NSF at the time, called me. I could not be heard. He explained that I had to make a noise before talking, so that the phone would know to listen to me instead of him. So, all those years at the phone company were not wasted. “Somewhere during those years, Lesk had the insight he shared in his famous paper: ‘Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone.’ That insight is what gives the alarming Large Language Models enough simulated understanding that they now scare the pants off all of us who work by pushing words onto paper or the Internet.” When Kantor and Lesk eventually worked together as colleagues on the SC&I faculty, Kantor said Lesk as a student mentor “knew that doing a Ph.D. is not a career choice, but a steppingstone, and he helped his students to step off that stone as quickly as possible and get on with their lives. “When Michael became department chair, he ruffled a lot of feathers by suggesting at department meetings that each of us could continue to say what we usually said, but please, only one time.
He finally resorted to showing the agenda with time stamps on a screen, to help remind all of us that some of us had other things we could do with our lives. “As the chair, Michael was very fair, working hard to secure promotions for people whose work did not move him at all, provided we could show that the people who do that kind of work recognized it to be good. “Personally, it was a delight and privilege to have him as a colleague from his arrival to my retirement (and as a trusted advisor on some very interesting research since then). Conversations with Michael, at lunch, or in an office, were the high points of the job. He has an ability to see what is at the root of a problem, and to know whether it is already solved, or solvable, or not worth the time. Rutgers was fortunate to have him.” Associate Professor of Practice of Library and Information Science Marc Aronson said, “Michael is the smartest person I have ever met. He never advertises or shows off his intelligence. Just the opposite -- he is clear, plain-spoken, but he cuts through the chatter and shines a light on the essence of whatever we are discussing or thinking -- he is seeing a step ahead of the rest of us. When we began talking about AI he said, ‘we've been here before: think of painting when photography was invented. There was a new way of capturing images -- which gave painters new assignments.’ Brilliant.” Associate Teaching Professor of Library and Information Science Joyce Kasman Valenza said, “For years, I knew of Michael Lesk as a pioneer in the field of computer science. During my time at SC&I, I had the honor of connecting with Michael as an esteemed colleague. Michael has the remarkable ability to cut through the noise and delve straight into the heart of important questions. I will greatly miss his wisdom and sharp wit, and his talent for catching us off guard with clever insights during interviews, presentations, and meetings.
Michael has made a profound impact on the lives of countless students. I wish him the very best on all the new journeys he chooses to take.” Looking ahead, Lesk said he is considering writing another book, one for a more popular audience. Learn more about the Library and Information Science Department on the Rutgers School of Communication and Information website. Original Article: https://comminfo.rutgers.edu/news/michael-lesk-who-helped-build-computer-operating-system-unix-transitions-professor-emeritus
- Call for Dissertation Students in Health Information and Libraries Journal
Health Information and Libraries Journal (HILJ) is a journal of international interest to researchers and practitioners in the library and health sectors, published by Wiley in conjunction with the CILIP Health Libraries Group. It is re-launching its regular feature ‘Dissertations into Practice’. If you have completed (or are supervising) a dissertation (or indeed any project in connection with your education/training) related to health information, then the editors are keen to hear from you. Dissertations into Practice provides an opportunity to produce a short article (approx. 2,500 words) from a student’s dissertation and, most importantly, a chance to reflect upon the importance of your research for professional practice. Past articles have covered a wide range of topics, from the use of social media for health information to collection development for bibliotherapy to empowering effective library users in the health sector. All papers present the research aim, background context and design, and main findings, and conclude with a final section discussing the implications for practice. Please browse the sample virtual issue, which is freely available and which links to the Future Technologies Conference theme of Smart Healthcare. If you are interested in contributing, please contact Dr. Frances Johnson, Senior Lecturer at the Dept. of Languages, Information & Communications, Manchester Metropolitan University, with a brief abstract of your project to discuss its potential for publication in HILJ.
- Assistant Professor, Teaching Track, in Data Science
University of Washington, The Information School. Application Deadline: October 01, 2023. The Information School of the University of Washington seeks an Assistant Teaching Professor in Data Science. The holder of this position will be expected to teach the study, design, and development of information technology for the good of people, organizations, society, and the environment. The successful applicant will be expected to be an engaged teacher and mentor, to engage in one or more of the domains of information technology below, and to engage with diversity, equity, inclusion, access, and justice in the context of teaching technical topics. The successful candidate will be expected to apply Data Science practice and theory in their teaching, with domain applications including but not limited to the environment, justice, or health and well-being. The successful candidate will also be expected to teach and address sociotechnical issues in one or more of the following areas. Positive factors for consideration include, but are not limited to, expertise in one or more of the following:
- Artificial Intelligence and Ethics
- Client-side and Full-Stack Web Development
- Cybersecurity
- Databases, Data Management, and Data Curation
- Applied Data Science, including Business Intelligence, Machine Learning, and Visualization
- Deep Learning
- Design, User Experience, and Human-Computer Interaction
- Information Ethics/Policy/Society
- Mobile Application Design and Development
- Natural Language Processing
- Networking and Cloud
- Privacy in Data Science
- Program and Product Management
- Software Engineering
The successful candidate will be expected to teach ways in which technology can be designed to minimize and mitigate its harm to people, societies, and the environment (e.g., via inaccessible user interfaces, exclusionary data schemas, misleading data visualizations, exploitative data collection practices, learned discrimination in machine learning).
The successful applicant will be expected to engage with social justice topics in their teaching of technical topics. Successful candidates will join a broad-based, inclusive Information School that offers multiple degree programs at the undergraduate and graduate level and is committed to the values of leadership, innovation, and diversity. The iSchool’s undergraduate major and minor in Informatics have grown to be among the most popular and most competitive programs at UW; this individual will be a key contributor to their ongoing success. Teaching professors are an integral part of the faculty of the iSchool. We provide mentorship, a career path, and opportunities for leadership in the school. This is a full-time appointment at the rank of Assistant Teaching Professor. This position includes faculty voting rights but is not tenure eligible. The University of Washington is on the quarter system (autumn, winter, spring) and teaching professors typically teach two courses per quarter (6 courses over 9 months) with summers off. Opportunities for summer teaching are often available. University of Washington teaching professors engage in teaching, mentorship, and service. Scholarship is supported and encouraged, including innovations in teaching, leadership in teaching communities of practice, and teaching mentorship. The University of Washington is a vibrant community of inclusive research and community outreach, situated between Puget Sound and Lake Washington, in the city of Seattle, on the traditional territories of the Coast Salish peoples. Seattle is a rapidly growing, dynamic, and diverse metropolitan area. The UW Information School is dedicated to hiring faculty who will enhance our inclusion, diversity, equity, access, and sovereignty (IDEAS) mission and vision through their research (as applicable), teaching, and service. 
As information systems and institutions serve increasingly diverse and global constituencies, it is vital to understand the ways in which differences in gender, class, race, ethnicity, religious affiliation, national and cultural boundaries, national origin, worldview, intellectual origin, ability, and other identities can both divide us and offer us better ways of thinking and working. The Information School faculty are committed to preparing professionals who work in an increasingly diverse and global society by promoting equity and justice for all individuals, actively working to eliminate barriers and obstacles created by institutional discrimination. The position is a full-time 9-month teaching-track appointment at the rank of Assistant Teaching Professor. Available start dates are January 1, 2024, March 1, 2024, or September 1, 2024. Applicants may find further information about the Information School at ischool.uw.edu. The base salary for this position will be $10,500 - $11,500 per month ($94,500 - $103,500 per 9-month academic year), commensurate with experience and qualifications, or as mandated by a U.S. Department of Labor prevailing wage determination.
Qualifications: Applicants must minimally have a master’s degree (or foreign equivalent) from a discipline that practices Data Science, including but not limited to social and behavioral sciences, biology, health sciences, computer science and engineering, and information and library sciences. Applicants must have 3 years of experience in a technical role in industry, government, or a nonprofit, or experience teaching at least one course as either the lead or assistant instructor.
Application Instructions: Please apply here: http://apply.interfolio.com/128179 Review of applications will begin immediately and continue until the position is filled. Preference will be given to candidates who apply by October 1, 2023.
Other applications will be reviewed beginning on the 1st of each month until finalists are chosen. Select candidates will be invited for campus visits. The initial application package must include a resume or CV, a cover letter, a diversity statement (see below), and names and contact information for three references, who may be contacted for letters of recommendation. We encourage you to choose references from anyone who can speak to your expertise, your ability to teach and mentor, or your general ability to collaborate and work in diverse settings. Short-listed candidates will later be asked to do a live teaching demonstration and submit a teaching statement. Details on these will be provided at the appropriate time. Please note: The cover letter is important. Drawing on your background, please tell us about your technical expertise, examples of how you might incorporate issues of social justice into your teaching of technical material, and why you’d like to do this teaching at the University of Washington Information School.
iSchool Diversity Statement Guidelines: Inclusion, diversity, equity, access, and sovereignty (IDEAS) are core values of the Information School, as described on our website: https://ischool.uw.edu/diversity. The Diversity Statement provides an opportunity for applicants to reflect on their research, teaching, and service accomplishments and goals that contribute to those values. We expect about a one-page statement that describes the applicant’s IDEAS efforts.
- Founding Department Chair and Professor, Computer Science
Indiana University, Luddy School of Informatics, Computing and Engineering at IUPUI Application Deadline: November 01, 2023. The Luddy School of Informatics, Computing, and Engineering in Indianapolis is seeking an exceptional scholar, educator, visionary, and dynamic leader to serve as Professor and Founding Chair of a new Department of Computer Science. The creation of this department is part of a bold vision to transform Indiana University Indianapolis into the state's next-generation urban research university. This presents a unique opportunity to shape and build a cutting-edge department, while also contributing to the strategic growth and expansion of Indiana University's premier urban research campus. The Luddy Department of Computer Science is a strategic priority for Indiana University Indianapolis [https://strategicplan.iupui.edu]. Its goal is to establish world-class and accessible computer science programs that will foster innovation, drive research excellence and economic development, and produce future leaders in the field, both for Indiana and beyond.