
A Conceptual Professional Assessment Model Based RDF Data Crowdsourcing

  • Conference paper
  • First Online:
Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD 2020)

Part of the book series: Lecture Notes on Data Engineering and Communications Technologies ((LNDECT,volume 88))


Abstract

This paper proposes a conceptual professional assessment model for RDF data crowdsourcing workers. Based on the concept hierarchy tree of RDF data crowdsourcing tasks, the model extracts test task instances similar to the crowdsourcing tasks from a standard knowledge base, and uses knowledge representation to automatically build a set of options for each test task instance, thereby generating conceptual professionalism test tasks. The quality of RDF data crowdsourcing workers is ensured by computing each worker's conceptual expertise from the results of the test tasks they complete. We carried out simulation experiments, and the results verify the validity of the proposed conceptual professional assessment model.
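The pipeline the abstract describes — drawing test instances from a standard knowledge base, building answer options from sibling concepts in the hierarchy, and scoring workers on the resulting multiple-choice tasks — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the toy hierarchy, the triple store, and all function names are assumptions.

```python
# Hedged sketch: generate "concept professionalism" test tasks from a
# concept hierarchy over RDF-style triples, then score a worker.
# All data structures here are illustrative assumptions.

import random

# Toy concept hierarchy tree: parent concept -> child concepts.
HIERARCHY = {
    "Work": ["Movie", "Book"],
    "Movie": ["ActionMovie", "Comedy"],
}

# Standard knowledge base: (subject, predicate, object) triples whose
# correctness is already known; test task instances are drawn from here.
KNOWLEDGE_BASE = [
    ("TheMatrix", "type", "ActionMovie"),
    ("AirplaneII", "type", "Comedy"),
]

def make_test_task(triple, hierarchy, rng):
    """Build one multiple-choice test task: the correct object plus
    distractor options taken from sibling concepts in the hierarchy."""
    subject, predicate, correct = triple
    siblings = [c for children in hierarchy.values()
                if correct in children
                for c in children if c != correct]
    options = [correct] + siblings
    rng.shuffle(options)
    return {"question": (subject, predicate),
            "options": options,
            "answer": correct}

def conceptual_expertise(tasks, worker_answers):
    """Fraction of test tasks the worker answered correctly."""
    correct = sum(1 for t, a in zip(tasks, worker_answers)
                  if a == t["answer"])
    return correct / len(tasks)

rng = random.Random(0)
tasks = [make_test_task(t, HIERARCHY, rng) for t in KNOWLEDGE_BASE]
answers = [t["answer"] for t in tasks]       # a perfect worker
print(conceptual_expertise(tasks, answers))  # 1.0
```

A worker whose expertise score falls below a chosen threshold could then be excluded from the real crowdsourcing tasks; the paper's actual expertise measure and thresholding are defined in the full text.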

L. Huang contributed equally to this work and should be considered co-first author.


Notes

  1. The Movie dataset comes from https://github.com/SimmerChan/KG-demo-for-movie.

  2. The ConsumableFood dataset comes from https://download.csdn.net/download/taoxiuxia/9454140.


Acknowledgement

This work was supported by the Scientific Research Project of the Education Department of Hubei Province under grant B2019008.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, Z., Huang, L., Yang, L., Li, Q. (2021). A Conceptual Professional Assessment Model Based RDF Data Crowdsourcing. In: Meng, H., Lei, T., Li, M., Li, K., Xiong, N., Wang, L. (eds) Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery. ICNC-FSKD 2020. Lecture Notes on Data Engineering and Communications Technologies, vol 88. Springer, Cham. https://doi.org/10.1007/978-3-030-70665-4_5
