PNG netball U21 squad to be named next week


PNG Netball president Julienne Leka-Maliaki said today that selectors were still finalizing the squad, which will be drawn from last month's 2015 Digicel national netball championships.

She said the focus of the national championships was to select the strongest team for the next championships in Auckland, New Zealand.

The last championships were held in 2013 in Glasgow, with New Zealand taking the title after beating Australia. The event, which is strictly for national U21 teams, is held every four years.

Maliaki said she was very impressed with the amount of raw talent identified at the Alotau championships.

Scientists help artificial intelligence outsmart hackers


By Matthew Hutson, May 14, 2019, 12:45 PM

Image caption: An artificial intelligence (AI) trained on the photos of a dog, crab, and duck (top) would be vulnerable to deception because these photos contain subtle features that could be manipulated. The images on the bottom row don't contain these subtle features, and are thus better for training secure AI. Credit: Ilyas, Santurkar, Tsipras, Engstrom, Tran, Madry

NEW ORLEANS, LOUISIANA—A hacked message in a streamed song makes Alexa send money to a foreign entity. A self-driving car crashes after a prankster strategically places stickers on a stop sign so the car misinterprets it as a speed limit sign. Fortunately, these haven't happened yet, but hacks like this, sometimes called adversarial attacks, could become commonplace—unless artificial intelligence (AI) finds a way to outsmart them. Now, researchers have found a new way to give AI a defensive edge, they reported here last week at the International Conference on Learning Representations.

The work could not only protect the public. It also helps reveal why AI, notoriously difficult to understand, falls victim to such attacks in the first place, says Zico Kolter, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, who was not involved in the research. Because some AIs are too smart for their own good, spotting patterns in images that humans can't, they are vulnerable to those patterns and need to be trained with that in mind, the research suggests.

To identify this vulnerability, researchers created a special set of training data: images that look to us like one thing, but look to AI like another—a picture of a dog, for example, that, on close examination by a computer, has catlike fur. Then the team mislabeled the pictures—calling the dog picture an image of a cat, for example—and trained an algorithm to learn the labels. Once the AI had learned to see dogs with subtle cat features as cats, they tested it by asking it to recognize fresh, unmodified images. Even though the AI had been trained in this odd way, it could correctly identify actual dogs, cats, and so on nearly half the time. In essence, it had learned to match the subtle features with labels, whatever the obvious features.
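The story does not spell out how such look-alike images are built, but a standard way to give a dog photo subtle "cat" features for a particular classifier is a small, targeted, gradient-based perturbation. The sketch below is a generic illustration in PyTorch, not the authors' published code; the model, tensors, and parameters (`model`, `x`, `y_target`, `epsilon`) are all hypothetical.

```python
# Minimal sketch: nudge images x (pixels in [0, 1]) toward class y_target in steps
# small enough that the change stays invisible to a human viewer.
import torch
import torch.nn.functional as F

def add_target_features(model, x, y_target, epsilon=8 / 255, steps=10):
    """Return a copy of x that the classifier leans toward labeling y_target."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Step against the loss gradient (toward the target class), then keep the
        # total change inside a small epsilon ball around the original image.
        x_adv = x_adv - (epsilon / steps) * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

Images produced this way, paired with the target label, play the role of the mislabeled dog-with-cat-features pictures described above, though the exact construction the team used may differ.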
The training experiment suggests AIs use two types of features: obvious, macro ones like ears and tails that people recognize, and micro ones that we can only guess at. It further suggests adversarial attacks aren't just confusing an AI with meaningless tweaks to an image. In those tweaks, the AI is smartly seeing traces of something else. An AI might see a stop sign as a speed limit sign, for example, because something about the stickers actually makes it subtly resemble a speed limit sign in a way that humans are too oblivious to comprehend.

Some in the AI field suspected this was the case, but it's good to have a research paper showing it, Kolter says. Bo Li, a computer scientist at the University of Illinois in Champaign who was not involved in the work, says distinguishing apparent from hidden features is a "useful and good research direction," but that "there is still a long way" to doing so efficiently.

So now that researchers have a better idea of why AI makes such mistakes, can that be used to help them outsmart adversarial attacks? Andrew Ilyas, a computer scientist at the Massachusetts Institute of Technology (MIT) in Cambridge, and one of the paper's authors, says engineers could change the way they train AI. Current methods of securing an algorithm against attacks are slow and difficult. But if you modify the training data to have only human-obvious features, any algorithm trained on it won't recognize—and be fooled by—additional, perhaps subtler, features.

And, indeed, when the team trained an algorithm on images without the subtle features, their image recognition software was fooled by adversarial attacks only 50% of the time, the researchers reported at the conference and in a preprint paper posted online last week. That compares with a 95% rate of vulnerability when the AI was trained on images with both obvious and subtle patterns.

Overall, the findings suggest an AI's vulnerabilities lie in its training data, not its programming, says Dimitris Tsipras of MIT, a co-author. According to Kolter, "One of the things this paper does really nicely is it drives that point home with very clear examples"—like the demonstration that apparently mislabeled training data can still make for successful training—"that make this connection very visceral."
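For a concrete sense of what the 50% versus 95% comparison measures, the rough sketch below counts how often small gradient-based perturbations flip a classifier's correct predictions. It is an illustration only: the FGSM-style attack, the model names, and the data loader are assumptions for the sketch, not the authors' code or exact evaluation protocol.

```python
# Rough sketch: fraction of correctly classified test images that a small
# adversarial perturbation pushes to a wrong label.
import torch
import torch.nn.functional as F

def fooled_rate(model, loader, epsilon=8 / 255):
    model.eval()
    fooled, total = 0, 0
    for x, y in loader:
        # One-step FGSM-style perturbation: move each pixel slightly in the
        # direction that increases the classification loss.
        x_req = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_req), y).backward()
        x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0, 1).detach()
        with torch.no_grad():
            clean_ok = model(x).argmax(1) == y        # images the model gets right
            adv_wrong = model(x_adv).argmax(1) != y   # ...that the attack then flips
        fooled += (clean_ok & adv_wrong).sum().item()
        total += clean_ok.sum().item()
    return fooled / max(total, 1)

# Hypothetical usage, assuming `standard_model` was trained on ordinary images and
# `robust_data_model` on images with the subtle, non-robust features removed
# (roughly 95% versus 50% of images fooled in the reported setting):
# fooled_rate(standard_model, test_loader)
# fooled_rate(robust_data_model, test_loader)
```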