Special Issue on
Software Tools and Techniques for Fog and Edge Computing
Software: Practice and Experience (Wiley Press)
Call for Papers
The Internet of Things (IoT) paradigm promises to make “things”, such as physical objects with sensing capabilities or attached tags, mobile objects such as smartphones and vehicles, and consumer electronics and home appliances such as fridges, televisions, and healthcare devices, part of the Internet environment. In cloud-centric IoT applications, the sensor data from these “things” is extracted, accumulated, and processed at public or private clouds, leading to significant latencies.
To satisfy the ever-increasing demand for Cloud Computing resources from emerging applications such as the Internet of Things (IoT), academics and industry experts now advocate moving from large, centralised Cloud Computing infrastructures to micro data centres located at the edge of the network. These micro data centres are often closer to a user, both geographically and in access latency, than the centralised cloud data centre. The aim of utilizing such edge resources is to offload computation that would traditionally have been carried out at the cloud data centre to a resource closer to the user or edge devices. This vision also acknowledges the variation in network latency between an end user and a cloud data centre: whereas the network around a data centre typically offers high capacity and speed, the network near the user device may have variable properties (in terms of resilience, bandwidth, latency, etc.).
Referred to as “fog/edge computing”, this paradigm is expected to improve the agility of cloud service deployments in addition to bringing computing resources closer to end users. On the one hand, building Fog and Edge clouds requires dedicated facilities, operating system, network, and middleware techniques to build and operate micro data centres that host virtualized computing resources. On the other hand, using Fog and Edge clouds requires extending current programming models and proposing new abstractions that allow developers to design applications that benefit from such massively distributed systems. This approach also opens up further challenges: security and privacy (a user now needs to “trust” every micro data centre they interact with), resource management for mobile users who transfer sessions from one micro data centre to another, and support for “embedding” micro data centres into devices (e.g. cars, buildings, etc.).
The Special issue seeks to attract contributions covering both theory and practice of any of the aforementioned challenges, from the management software stack to domain-specific applications. Topics of interest include (but are not limited to):
* Data centers and infrastructures for Fog/Edge Computing
* Middleware for Fog/Edge infrastructures
* Programming models and runtime systems for Fog/Edge Computing
* Scheduling for Fog/Edge infrastructures
* Fog/Edge storage
* Monitoring/metering of Fog/Edge infrastructures
* Fog/Edge Computing applications
* Latency/locality-critical applications
* Legal issues in Fog/Edge clouds
* Security and privacy – including support for new cryptographic approaches
* Modelling Fog/Edge environments – e.g. using process networks, agent-based models, Peer-2-Peer systems, etc.
* Performance monitoring and modelling
Special Issue Paper Submission
This special issue invites submissions that present novel and innovative ideas. It also welcomes extended versions of the best selected papers presented at the 2nd IEEE International Conference on Fog and Edge Computing (ICFEC 2018, May 3, 2018, Washington DC, USA, held in conjunction with IEEE/ACM CCGrid 2018, the 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing; http://www.cloudbus.org/fog/icfec2018/). All submissions, including invited papers, will undergo the regular peer review process.
We seek papers that present new, original, and innovative ideas for the first time in SPE. Submission of extended versions of already published works (conference papers) is discouraged unless they contain a significant number of new and original ideas/contributions along with more than 50% brand new material. If you are submitting an extended version of an already published conference paper, you must submit a cover letter/document detailing (1) a summary of differences between the SPE paper and the earlier paper, (2) a clear list of the new and original ideas/contributions in the SPE paper (identifying the sections where they are proposed/presented), (3) confirmation of the percentage of new material, and (4) the original conference paper. Otherwise, the submission will be desk rejected without review.
When submitting a paper to this issue, please select “Special Issue – Software Tools and Techniques for Fog and Edge Computing” in the submission system.
Regular Issue Submission
If you have a paper on cloud computing or IoT which does not match the requirements of the Special Issue, we encourage you to submit it as a regular paper to Software: Practice and Experience. The journal has expanded its coverage to specifically include cloud computing and IoT.
Submission due date: July 30, 2018 (extended from April 30, 2018)
Notification of acceptance: September 30, 2018 (previously July 30, 2018)
Submission of final manuscript: November 30, 2018 (previously August 30, 2018)
Publication date: 1st Quarter, 2019 (Tentative)
Guest Editors
A/Prof. Rajiv Ranjan (Corresponding Guest Editor), Newcastle University, UK
A/Prof. Massimo Villari, University of Messina, Italy
A/Prof. Haiying Shen, University of Virginia, USA
Prof. Omer Rana, Cardiff University, UK
Prof. Rajkumar Buyya, University of Melbourne, Australia
Storage-research-list mailing list
CALL FOR PAPERS - PDSW-DISCS '18
The 3rd Joint International Workshop on Parallel Data Storage and Data
Intensive Scalable Computing Systems (PDSW-DISCS'18)
Monday, November 12, 2018 9:00am - 5:30pm
SC'18 Workshop - Dallas, TX
### IMPORTANT DATES ###
Regular Papers and Reproducibility Study Papers:
- Submissions due: Sep. 2, 2018, 11:59 PM AoE
- Paper Notification: Sep. 30, 2018
- Camera ready due: Oct. 5, 2018
- Slides due: Nov. 9, 2018, 3:00 pm CST
Work in Progress (WIP):
- Submissions due: Nov. 1, 2018, 11:59 PM AoE
- WIP Notification: Nov. 7, 2018
### WORKSHOP ABSTRACT ###
We are pleased to announce that the 3rd Joint International Workshop on
Parallel Data Storage and Data Intensive Scalable Computing Systems
(PDSW-DISCS’18) will be hosted at SC18: The International Conference for
High Performance Computing, Networking, Storage and Analysis. This one-day
joint workshop combines two overlapping communities to better promote and
stimulate researchers’ interactions to address some of the most critical
challenges for scientific data storage, management, devices, and processing
infrastructure for both traditional compute intensive simulations and
data-intensive high performance computing solutions. Special attention will
be given to issues in which community collaboration can be crucial for
problem identification, workload capture, solution interoperability,
standards with community buy-in, and shared tools.
Many scientific problem domains continue to be extremely data intensive.
Traditional high performance computing (HPC) systems and the programming
models for using them such as MPI were designed from a compute-centric
perspective with an emphasis on achieving high floating point computation
rates. But processing, memory, and storage technologies have not kept pace
and there is a widening performance gap between computation and the data
management infrastructure. Hence data management has become the performance
bottleneck for a significant number of applications targeting HPC systems.
Concurrently, there are increasing challenges in meeting the growing demand
for analyzing experimental and observational data. In many cases, this is
leading new communities to look towards HPC platforms. In addition, the
broader computing space has seen a revolution in new tools and frameworks
to support Big Data analysis and machine learning.
There is a growing need for convergence between these two worlds.
Consequently, the U.S. Office of Management and Budget has
informed the U.S. Department of Energy that new machines beyond the first
exascale machines must address both traditional simulation workloads as
well as data intensive applications. This coming convergence prompted the
integration of the PDSW and DISCS workshops into a single entity to address
the common challenges.
### TOPICS OF INTEREST ###
** Scalable storage architectures, archival storage, storage
virtualization, emerging storage devices and techniques
** Performance benchmarking, resource management, and workload studies from
production systems, including both traditional HPC and data-intensive workloads
** Programmability, APIs, and fault tolerance of storage systems
** Parallel file systems, metadata management, complex data management,
object and key-value storage, and other emerging data storage/retrieval techniques
** Programming models and frameworks for data intensive computing, including
extensions to traditional and nontraditional programming models, as well as
asynchronous multi-task programming models
** Techniques for data integrity, availability, and reliability, especially at scale
** Productivity tools for data intensive computing, data mining, and knowledge discovery
** Application or optimization of emerging “big data” frameworks towards
scientific computing and analysis
** Techniques and architectures to enable cloud and container-based models
for scientific computing and analysis
** Techniques for integrating compute into a complex memory and storage
hierarchy facilitating in-situ and in-transit data processing
** Data filtering/compressing/reduction techniques that maintain sufficient
scientific validity for large scale compute-intensive workloads
** Tools and techniques for managing data movement among compute and data
intensive components both solely within the computational infrastructure as
well as incorporating the memory/storage hierarchy
### SUBMISSION GUIDELINES ###
This year, we are soliciting two categories of papers: regular papers and
reproducibility study papers. Both will be evaluated by a competitive peer
review process under the supervision of the workshop program committee.
Selected papers and associated talk slides will be made available on the
workshop web site. The papers will also be published in the digital
libraries of the IEEE and ACM.
### Regular Paper Submissions:
We invite regular papers which may optionally undergo validation of
experimental results by providing reproducibility information. Papers
successfully validated earn a badge in the ACM DL in accordance with ACM's
artifact evaluation policy.
### New! Reproducibility Study Paper Submissions:
We also call for reproducibility studies that for the first time reproduce
experiments from papers previously published in PDSW-DISCS or in other
peer-reviewed conferences with similar topics of interest. Reproducibility
study submissions are selected by the same peer-reviewed competitive
process as regular papers, except these papers undergo validation of the
reproduced experiment and must include reproducibility information that can
be evaluated by a provided automation service. Successful validation earns
the original publication a badge in the ACM DL in accordance with ACM’s
artifact evaluation policy.
### Guidelines for Regular Papers and Reproducibility Study Papers:
Submit a previously unpublished paper as a PDF file, indicating authors and
affiliations. Papers must be at least 8 pages and no more than 12 pages long
(including appendices and references). Papers must use the IEEE
conference paper template available at:
https://www.ieee.org/conferences/publishing/templates.html Please see the
workshop website for more information.
Details on reproducibility will be available on the website by July 1, 2018.
### Work-in-progress (WIP) Submissions:
There will be a WIP session where presenters give brief 5-minute talks on
their ongoing work, presenting fresh problems and solutions. WIP content is
typically material that is not yet mature or complete enough for a full
paper submission. A one-page abstract is required.
### WORKSHOP ORGANIZERS ###
** Kathryn Mohror, Lawrence Livermore National Laboratory
** Suzanne McIntosh, New York University
** Raghunath Raja Chandrasekar, Amazon Web Services
** Carlos Maltzahn, University of California, Santa Cruz
** Ivo Jimenez, University of California, Santa Cruz
** Glenn K. Lockwood, Lawrence Berkeley National Laboratory
Web and Proceedings Chair:
** Joan Digney, Carnegie Mellon University